12/07/2023 / By Ethan Huff
The recent dismissal, then reinstatement, of OpenAI CEO Sam Altman may have had something to do with rumors concerning a company breakthrough in artificial general intelligence (AGI), which some say threatens humanity.
In the days leading up to Altman getting canned, it is alleged (but not yet proven) that several staff researchers at OpenAI wrote a letter to the company board describing a major breakthrough called Q*, pronounced as Q-Star, that supposedly allows AI robots to “surpass humans in most economically valuable tasks.”
Reuters reported on this alleged breakthrough, which is believed to have played a significant role in the board's abrupt firing of Altman. There are also concerns that commercialization of this advanced AI model is moving too quickly, without any consideration of the socio-economic consequences of unleashing this kind of thing on the world.
According to an inside source, Q* is able to solve mathematical problems, but only "on the level of grade-school students." Even so, Q*'s perfect performance on the math problems was enough to excite the researchers, who wrote that they are "very optimistic about Q*'s future success."
Before getting canned, Altman had also reportedly made reference to Q* at the recent Asia-Pacific Economic Cooperation summit in San Francisco. Here is what Altman said at the conference:
“Four times now in the history of OpenAI – the most recent time was just in the last couple of weeks – I’ve gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward.”
(Related: Some of the latest AI advances include using synthetic biotechnology to create demon-possessed “superhuman” biological systems.)
There is apparently an internal battle taking place at OpenAI between two ideologically opposed camps. One side wants such technology to advance as quickly as possible, while the other wants to hit the brakes a little bit, at least until more is known about what the technology will do to the world.
The recent Hollywood actors’ and writers’ strike is an example of the concerns of the opposing side, which sees the threat of AI in terms of taking away human jobs. If an AI robot can write a script and also act it out, why pay actual human actors millions of dollars to do the same thing?
These are the types of dilemmas that employees in many sectors increasingly face as their jobs become obsolete. Is it fair or even morally right to unleash AI robots when doing so threatens to collapse the human economy and leave everyone desolate and starving?
Thankfully for humanity, AI robots are still far too primitive compared to a real human mind – and the hope is that they will stay that way. Even Q* still only operates at a grade school level, but what happens when it becomes “smart” enough to compete with a college student or a PhD?
“AGI has the potential to surpass humans in every field, including creativity, problem-solving, decision-making, language understanding, etc., raising concerns about massive job displacement,” one report explains about the technology.
Banking giant Goldman Sachs released a report recently warning that upwards of 300 million human jobs will be lost throughout the West because of AI.
Because of Altman's support for advanced AI of this capacity and scope, it is believed that the board removed him in an attempt to shift the company in a different direction. Altman is back, though.
“I suspect AI is something akin to UFOs – something used to scare people but without any concrete evidence to back the propaganda,” wrote one skeptical commenter about the sudden AI phenomenon.
The latest news about the AI takeover of the world can be found at FutureTech.news.