The American author Joe Milazzo said:
…readability is not strictly a matter of “clarity.” In my experience, without some more profound level of engagement — with the imagination — understanding is incomplete. Or, perhaps: more easily thwarted.
But readability tools and standards have proliferated. In both print and online content industries, considering your audience is de rigueur, and standards such as the Automated Readability Index are well established. Since human interaction is a key goal of ChatGPT, does OpenAI optimise for readability?
Another follow-up question for you: Is your language style geared to a specific level of understanding or reading ability?
My language style is geared to be understandable by a wide range of people, but it is based on the training data that I have been exposed to. The data that I was trained on is sourced from a variety of sources and is intended to be understandable by a wide range of people, but it may not always be appropriate for all audiences. It is important to keep in mind that my responses are generated based on patterns in the data I have been trained on, and may not always be appropriate or accurate for a specific context.
The Hemingway app uses the well-known Automated Readability Index to gauge readability. Your previous response was deemed to have “poor” readability (Grade 15). The second and third sentences were flagged as “very hard to read”, and eight uses of the passive voice were detected. Is this typical of ChatGPT’s communication style?
The language style used by ChatGPT is based on its training data, which includes a wide range of text from various sources with different levels of complexity. The readability of the output may vary depending on the input and context. However, it is important to note that readability is not the only measure of the quality of language and ChatGPT is capable of producing clear and concise responses when trained on appropriate data and fine-tuned for specific tasks.
ChatGPT may be able to consider the context and adjust readability to suit the person it is talking to. An interesting experiment would be to observe the model’s responses to people of different ages while discussing the same subject.
A case study: targeting responses to particular audiences
Below I test ChatGPT’s ability to discuss a concept (“democracy”) under different constraints on clarity and length.
Can you give me an example of a simple sentence explaining democracy
“Democracy is a system of government where citizens have the power to make decisions through voting.”
(Hemingway readability score: Grade 11 — OK)
Can you give me an example of a very complex sentence explaining democracy
“The concept of democracy, in which the power is vested in the people and exercised through a system of representation and periodic free and fair elections, has undergone significant evolution throughout history, manifesting in various forms such as direct democracy, representative democracy, and deliberative democracy, each with its own advantages and disadvantages, and is further complicated by the ongoing debate surrounding the role of minority rights and the balance of power between the legislative, executive, and judicial branches of government.”
(Hemingway readability score: Postgraduate — Poor)
Can you give me a short sentence which is nevertheless complex explaining democracy
“The democratic process, in which all eligible citizens have an equal say in the decisions that affect their lives, is a cornerstone of a functioning society.”
(Hemingway readability score: Grade 15 — Poor)
Can you give me a long sentence which is nevertheless simple explaining democracy.
“Democracy is a system of government in which citizens have the power to make decisions about how they are governed through the regular and free election of their representatives.”
(Hemingway readability score: Postgraduate — Poor)
Finally, can you give me a sentence explaining democracy suitable for children aged 7
“Democracy is a way for people to choose their leaders by voting.”
(Hemingway readability score: Grade 5 — Good)
In my own use of Hemingway I have noticed that it optimises for short sentences. Many of Hemingway’s scores for ChatGPT’s output are Poor or worse. This exchange occurred at the end of an extended conversation, so I don’t know whether ChatGPT had tailored its responses to my context.
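Hemingway’s exact grading isn’t public, but the Automated Readability Index it draws on is a simple formula over characters, words and sentences. A minimal Python sketch of the standard ARI formula (the tokenisation here is my own rough approximation, not Hemingway’s code):

```python
import re

def automated_readability_index(text: str) -> float:
    """Standard ARI formula:
    4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43
    """
    # Crude sentence split on terminal punctuation (an approximation)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Words as runs of letters, digits and apostrophes
    words = re.findall(r"[A-Za-z0-9']+", text)
    chars = sum(len(w) for w in words)
    return (4.71 * (chars / len(words))
            + 0.5 * (len(words) / len(sentences))
            - 21.43)

simple = "Democracy is a way for people to choose their leaders by voting."
print(round(automated_readability_index(simple), 1))  # prints 5.0
```

For that 12-word sentence the formula gives roughly 5, consistent with the “Grade 5 — Good” score above. The words-per-sentence term grows quickly with length, which would explain why even the long “simple” sentence about democracy scored Postgraduate.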
Generally speaking, I found ChatGPT to be approachable and understandable. When questioned, it claims to limit lists to five or fewer points to make them more digestible.
If Joe Milazzo (above) is correct, does this mean that ChatGPT is in some way emulating imagination? ChatGPT claims it has no imagination. But it also says humans should make final judgements, so I reckon only time will tell…