John Bateman continued on SYSFLING 28 Nov 2024, at 21:35:
… For all such outputs, it is generally useful to know the precise language model being used and the basic sampling settings: the 'temperature', i.e., how constrained the behaviour is by the prompt, and the number of potential selections that are considered 'part of the mix' when moving to the next token. Changing these produces very different behaviour. And, of course, as is now becoming increasingly relevant, the 'history' of prompts maintained for any particular interaction.
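The two settings Bateman mentions can be made concrete with a minimal sketch of next-token sampling. This is an illustration only, not any vendor's actual sampler: `temperature` rescales the model's scores before they become probabilities, and `top_k` caps how many candidate tokens stay 'part of the mix'. The function name and the toy logits are invented for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=5, rng=None):
    """Pick a next-token id from raw model scores (logits).

    Low temperature -> behaviour tightly pinned to the top choice;
    high temperature -> flatter, more 'creative' distribution.
    top_k limits how many candidates are considered at all.
    """
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    # Keep only the top_k highest-scoring candidate tokens.
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    # Temperature-scaled softmax over the survivors.
    scaled = [logits[i] / temperature for i in ranked]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw from the remaining candidates.
    r = rng.random()
    acc = 0.0
    for token_id, p in zip(ranked, probs):
        acc += p
        if r < acc:
            return token_id
    return ranked[-1]
```

With `temperature` near zero the draw almost always lands on the single highest-scoring token, which is why lowering it makes output more deterministic; with `top_k=1` only that token survives the cut in the first place.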
I wonder in particular about the latter as the responses of the system seem set to 'crazily over-enthusiastic puppy' mode, where any user prompt gives rise to phrases of excessive positive evaluation with personal stand-taking,
e.g., "Yes, that’s a fascinating distinction!", "Ah, I love this idea!", etc.
Producing this kind of phrasing is usually the result of what is called reinforcement learning with human feedback (RLHF), where language model output is pushed towards responses that human users have rated positively along some dimensions of choice, such as 'congenial'. …
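The RLHF training Bateman describes typically starts from a reward model trained on pairwise human preferences. The sketch below shows the standard Bradley-Terry style loss used for that step: it is small when the reward model already scores the human-preferred (e.g. more 'congenial') reply higher. The two toy replies and their reward scores are invented for illustration.

```python
import math

def bradley_terry_loss(r_preferred, r_other):
    """-log sigmoid(r_preferred - r_other).

    Small when the reward model ranks the human-preferred response
    above the alternative; large when it gets the ranking wrong.
    Minimising this pushes the reward model (and, downstream, the
    language model tuned against it) towards the rated-positive style.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_preferred - r_other))))

# Hypothetical ratings: annotators preferred the enthusiastic reply.
toy_reward = {
    "Yes, that's a fascinating distinction!": 2.1,
    "That distinction exists.": 0.3,
}
loss = bradley_terry_loss(
    toy_reward["Yes, that's a fascinating distinction!"],
    toy_reward["That distinction exists."],
)
```

If raters consistently prefer enthusiastic phrasings, the reward model learns to score them higher, and the policy optimised against it drifts towards exactly the 'over-enthusiastic puppy' register the post describes.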
Blogger Comments:
[1] This is precisely why the ChatGPT conversation might be of interest to Systemicists. It shows one way that a dialogue can develop, with each turn depending on the meaning selections of the previous turn.
[2] Again, this is precisely why the ChatGPT conversation might be of interest to Systemicists. ChatGPT, an artefact with no experience of emotion, echoes the attitudinal semantics of the human interactant.
[3] To be clear, the high graduation of ATTITUDE in the ChatGPT turns echoes my earlier positive attitude when I had been using ChatGPT to create scenarios that continually had me laughing uproariously. The reason I appreciated the cheering up so much was that my partner of the last forty years had very recently died in our home, while I was asleep, after having suffered from early onset dementia for about three years.