

LLMs are also feeding into many people's current unwillingness to think. People seem to just want an answer to their questions instead of an explanation. They want a short version of "the truth" handed to them instead of making the effort to learn, research, and think critically.
While I do believe in the usefulness of AI and the advantages it offers, I also think it's very dangerous in the modern age of "information consumerism". We should teach kids about AI, but also encourage critical thinking and problem solving instead of depending on LLMs to solve our problems for us. In the end, LLMs are just machines that guess what the next word will be, based on their training dataset and, optionally, some sophisticated algorithms for logical "reasoning" and mathematical computation.
Well, technically, traffic laws and markings are intended for humans, not machines. If human drivers were eliminated, the roads could be reconfigured to better suit self-driving cars. It's the same if you try to make a robot exactly replicate a human: it won't be very good, but if you build a non-humanoid robotic system designed for automation, it will be more efficient.