Why Generative AI May Not Be As Intelligent As You Think
ChatGPT, a form of generative AI, has taken the world by storm, reaching 100 million users within two months of launch. ChatGPT is a generative pre-trained transformer (GPT): it learns patterns from a large training corpus, and it can be further refined using data collected from its interactions with users.
Users can rate the AI's responses to their prompts, and these ratings are fed back into later rounds of training. If the AI produces a bad response, the user's rating helps steer future versions of the model away from similar mistakes. However, while this technology will be revolutionary for the future development of AI, it has some potentially unavoidable downsides that need to be addressed.
- Even though AI can create like humans, it doesn't create through the same process as humans. Humans draw on existing ideas and information to come up with something new based on their own wants and desires. AI, on the other hand, takes historical data from previous human works and produces output that statistically matches both that data and the prompt it was given. This means it can't create entirely new content; instead, it effectively recombines and paraphrases patterns drawn from thousands of existing works. This limitation makes generative AI programs poorly suited to jobs that require genuine creativity.
- Generative AI programs like ChatGPT can be biased if the training data used to teach them is biased. The AI uses its training data to make sense of the prompts it is given: the data teaches it what each prompt is and what it most likely means. If that data is flawed or biased, the AI's output will inevitably reflect it. A well-known example is Google's Bard, which stated that the James Webb Space Telescope took the first image of a planet outside the solar system (the first such image was actually taken at the European Southern Observatory in 2004). In addition, generative AI programs can pick up bias from the prompts users give them. Programs like ChatGPT not only rely on previously collected training data but can also incorporate users' prompts into later training. While this can make the AI even more capable, it is not foolproof: if users troll the system by feeding it biased input, the AI can become biased. A good example is Microsoft's Twitter bot, Tay, which became racist and misogynistic in under a day. This goes to show how AI bots can be manipulated by humans and the biased data they supply.
- Lastly, generative AI can be misused for malicious purposes, such as writing malware or spreading false information. Deepfake tools can fabricate videos of celebrities or politicians saying things they never said, which poses a significant security risk and can misinform millions of people. In addition, AI systems can be hacked, which could pose serious risks to businesses that become completely reliant on them.
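The "recombining existing works" point in the first bullet can be sketched with a toy bigram model. This is a drastic simplification of how large language models work, and the tiny corpus here is purely illustrative, but it shows the core idea: the generator can only emit word-to-word transitions it has already seen in its training data.

```python
import random

random.seed(0)

# Hypothetical toy corpus standing in for "historical data of previous human works".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Build a bigram table: each word maps to the words observed to follow it.
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

def generate(start, length):
    """Emit a sequence by repeatedly sampling a follower word seen in training."""
    out = [start]
    for _ in range(length - 1):
        out.append(random.choice(followers[out[-1]]))
    return " ".join(out)

# Every adjacent word pair in the output already occurs somewhere in the corpus:
# the model recombines its data rather than inventing new transitions.
print(generate("the", 6))
```

Real models learn far richer statistics than word pairs, but the same principle applies: the output space is shaped entirely by the training data.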
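The bias mechanism in the second bullet can be reduced to a minimal sketch: a model that predicts the most frequent completion in its training data will faithfully reproduce any skew in that data. The occupation/pronoun pairs below are a hypothetical, deliberately skewed dataset, not real training data.

```python
from collections import Counter

# Hypothetical skewed training data: 9 of 10 examples pair "doctor" with "he".
training_pairs = [("doctor", "he")] * 9 + [("doctor", "she")] * 1

# A model that simply predicts the most frequent completion it saw in
# training inherits the skew in that data.
counts = Counter(completion for _, completion in training_pairs)
prediction = counts.most_common(1)[0][0]
print(prediction)  # → he
```

Fixing this requires curating the data itself; the prediction rule is behaving exactly as designed.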
While AI technology has enormous potential benefits, it also presents significant challenges that need to be addressed. As AI systems continue to improve, we must stay mindful of these drawbacks and work to mitigate them, so that AI is used for positive purposes.