A question for the theorists among us. Like so many others, I have been trying out ChatGPT and have been astounded at what it can do. For example, I asked it to translate into English passages from an obscure Buddhist text in Classical Chinese that I am 99.9% sure has never been translated before, and it produced a reasonably good translation. That is not particularly surprising. What did surprise me is that when challenged on parts of its translation, it was able to engage in a discussion of why it had translated in a certain way, including unpacking some metaphors (I can upload some screenshots if anyone is interested). It was also able to consider and evaluate alternative translations. How does it do this without some semantic “understanding”? I know nothing about AI, but from everything I have read and heard its language processing is entirely connectionist. It trawls through huge amounts of text, identifying and matching patterns, and making predictions. It can do some basic parsing of syntax but, I am assured, cannot do any kind of semantic analysis, including semantically oriented functional analyses. Any semantic “understanding” it comes to must be gleaned through identifying and comparing intra-text relations, i.e. collocations. So here’s my question. Was John Sinclair right when he said that SFL greatly exaggerated the role of (paradigmatic) lexicogrammar and greatly underestimated the role of (syntagmatic) lexical collocation in generating coherent text?
Blogger Comments:
To be clear, the claim attributed to John Sinclair misunderstands SFL Theory. In logogenesis, the syntagmatic juxtaposition of words is the realisation of choices in paradigmatic lexicogrammatical systems. So it is not a matter of "underestimating the role" of one and "exaggerating the role" of the other: one is the (less abstract) realisation of the other.
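As an aside, for readers wondering what "identifying and comparing intra-text relations, i.e. collocations" might amount to computationally, the following is a minimal illustrative sketch, not ChatGPT's actual mechanism and not anything attributed to Lock or Sinclair. It scores adjacent word pairs in a toy corpus by pointwise mutual information; the corpus, the naive tokeniser and the scoring choice are all assumptions made purely for illustration, since real language models learn distributed representations rather than explicit collocation tables.

```python
# Illustrative sketch only: scoring syntagmatic collocations by pointwise
# mutual information (PMI) over a toy corpus. The corpus and all names here
# are assumptions for illustration, not anyone's actual method.
import math
from collections import Counter

corpus = [
    "the monk translated the sutra",
    "the monk recited the sutra",
    "the translator unpacked the metaphor",
    "the monk unpacked the metaphor",
]

def tokenise(line):
    # Naive whitespace tokeniser; real systems use subword tokenisation.
    return line.lower().split()

# Count unigrams and adjacent word pairs (a crude stand-in for collocation).
unigrams = Counter()
bigrams = Counter()
for line in corpus:
    tokens = tokenise(line)
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

total_unigrams = sum(unigrams.values())
total_bigrams = sum(bigrams.values())

def pmi(w1, w2):
    # PMI = log2( P(w1, w2) / (P(w1) * P(w2)) ); higher means the pair
    # co-occurs more often than chance would predict.
    p_pair = bigrams[(w1, w2)] / total_bigrams
    p1 = unigrams[w1] / total_unigrams
    p2 = unigrams[w2] / total_unigrams
    return math.log2(p_pair / (p1 * p2))

# Rank the observed pairs by how strongly they collocate.
for (w1, w2), count in bigrams.most_common():
    print(f"{w1} {w2}: count={count}, PMI={pmi(w1, w2):.2f}")
```

Whether such purely syntagmatic statistics could suffice to generate coherent text is, of course, exactly the point at issue in the question above.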
Postscript:
There have been more than 50 replies to this post on Sysfling, not one of which answers Graham Lock's theoretical question.