• According to sources, OpenAI is training its next generation of artificial intelligence under the working name "Q*" (pronounced Q-Star); the next generation of OpenAI products may be released in the new year.
• The data bottleneck refers to the limited supply of high-quality data available for training AI, and synthetic data is expected to break this bottleneck. Beyond the sheer demand for large amounts of high-quality data, data-security considerations are another important driver of synthetic data.
• As the world's most powerful AI, ChatGPT has run into bottlenecks in computing power and other areas. Against this backdrop, the application of quantum computers in the field of artificial intelligence is being discussed as a potential future solution.
• AI agents and code-free software development bring "shock waves".
In 2023, the world witnessed ChatGPT's explosive global popularity. The advent of a new generation of artificial intelligence, represented by generative AI, has changed the development trajectory of artificial intelligence (AI) technology and its applications, accelerated the interaction between people and AI, and marks a new milestone in the history of the field. What trends will AI technologies and applications show in 2024? Here are some of the big trends to watch.
Trend 1: From large AI models to general AI
In 2023, ChatGPT developer OpenAI came under an unprecedented spotlight, which has also pushed the development of GPT-4's successors to the forefront. According to sources, OpenAI is training its next generation of artificial intelligence under the working name "Q*" (pronounced Q-Star). In the new year, the next generation of OpenAI products may be released.
According to media reports, "Q*" may be the first artificial intelligence trained entirely "from scratch": it does not derive its training data from human activity, and it can modify its own code to accommodate more complex learning tasks. The former makes the development of AI capabilities increasingly opaque, while the latter has long been regarded as a necessary condition for the emergence of an AI "singularity." In the field of artificial intelligence, the "singularity" refers to the point at which a machine can iterate on itself, develop rapidly within a short period of time, and ultimately move beyond human control.
Although some reports say that "Q*" can currently solve only difficult elementary-school math problems, which is still far from a "singularity," the pace of AI iteration in virtual environments may be much faster than imagined, so it remains possible that AI capable of exceeding human level in every field could be developed independently in the near future. In 2023, OpenAI predicted that artificial intelligence surpassing humans in all respects would appear within ten years; Nvidia founder Jen-Hsun Huang has said that general artificial intelligence could surpass humans within five years.
Once general artificial intelligence is realized, it could be applied to a wide range of complex scientific problems, such as the search for extraterrestrial life and habitable extraterrestrial planets, controlled nuclear fusion, the screening of nano- or superconducting materials, and anti-cancer drug development. Such problems often take human researchers decades to solve, and the volume of work in some frontier areas exceeds human limits. General AI would have almost unlimited time and energy in its own virtual world, making it a potential substitute for human researchers in tasks that are easy to virtualize. At that point, however, how to supervise artificial intelligence that exceeds human intelligence, so as to ensure it does not harm human beings, is a question worth thinking about.
Of course, we should not overestimate some of the statements of the Silicon Valley giants: the history of artificial intelligence has already seen three "AI winters," and there are many examples of grand technical visions that failed to materialize because of various constraints. For now, though, it is certain that large-model technology still has plenty of room for improvement. Besides GPT-4, Google's Gemini and Anthropic's Claude 2 currently rank just behind it, while Baidu's "Wenxin Yiyan" (ERNIE Bot) and Alibaba's "Tongyi Qianwen" are among the leaders in China's domestic large models. It will also be interesting to see whether they release more revolutionary products in the new year.