Can You Talk to AI About Complex Topics?

With advances in natural language processing and machine learning, AI has proven to be a capable conversationalist, even on complicated subject matter. For example, OpenAI’s GPT-4 has shown remarkable ability to handle complex subjects across physics, philosophy, and medicine. Even on highly specialized material such as the US medical licensing exam, GPT-4 managed to score in the top 10% of human test-takers in 2023. But even though it can handle technical content, AI’s grasp of these subjects is only relative: it can explain things in detail and offer some insights, yet it cannot draw on personal experience or exercise the kind of judgment a good human expert can.

On scientific topics, for example, GPT-4 can explain quantum mechanics or the theory of relativity at a reasonably sophisticated level. Indeed, researchers and scientists are using AI models to digest huge swathes of scientific papers and generate summaries, a task that would take human researchers many times longer [6]. One such breakthrough is an AI from DeepMind that assisted in predicting protein structures, which can lead to faster drug discovery. With a 92.4% success rate, the algorithm proved superior to earlier methods at predicting protein structures accurately.

Still, AI has great difficulty with topics that are highly subjective. Ask AI about ethics or human feelings, and the answers quickly become formulaic and emotionally hollow. AI lacks the emotional intelligence needed for sensitive discussions about mental health and human relationships. According to a survey conducted by the National Institutes of Health (NIH), 68% of users would rather talk to a human professional than to an AI, because AI cannot comprehend complex emotional states.

Moreover, the quality of the data an AI model is trained on plays a huge role in how well it performs. An AI learns from the data it is given, and if that data is biased or incomplete, so are its answers. When researchers at MIT demonstrated in 2020 that AI models could produce biased results in legal or hiring contexts, this typically happened because the training data was historical and encoded assumptions about past behavior. So yes, AI is capable of engaging with complex topics like those listed above; however, it may not always account for all the variables and nuances those subjects contain.
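The mechanism behind this kind of bias is easy to demonstrate. Below is a minimal sketch with entirely made-up hiring records (the data, groups, and the simple majority-vote "model" are all hypothetical, chosen only to illustrate the point): a model trained on skewed historical outcomes faithfully reproduces the skew, recommending different decisions for equally qualified candidates.

```python
from collections import defaultdict

# Hypothetical historical records: (group, qualified, hired).
# The data is skewed: equally qualified candidates from group "B"
# were hired less often in the past.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True),
]

# "Train" a naive model: memorize the majority historical outcome per group.
outcomes = defaultdict(list)
for group, qualified, hired in history:
    outcomes[group].append(hired)

def predict(group):
    # Predict "hire" if most past candidates in this group were hired.
    votes = outcomes[group]
    return sum(votes) > len(votes) / 2

print(predict("A"))  # True  -> recommends hiring
print(predict("B"))  # False -> the historical skew is reproduced
```

Real models are far more complex than this majority vote, but the lesson is the same: nothing in the training process distinguishes "pattern in the world" from "pattern in the record-keeping," which is why curating training data matters so much.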

As AI models are trained with richer data and more sophisticated techniques, they will be able to handle increasingly complex topics. However, they still need human oversight to verify facts, think critically, and act ethically. AI is a great tool for generating first ideas, but that does not mean it should replace professional review or judgment when it comes to specialist or complex topics.
