OpenAI Was Never Going to Save Us From the Robot Apocalypse

Last week, OpenAI’s board fired its CEO Sam Altman. Days later, Altman effectively fired OpenAI’s board and reclaimed his spot in its C-suite.

To comprehend this bizarre corporate farce, you have to understand the artificial-intelligence firm’s bizarre corporate structure. And to comprehend that structure, you must understand that many of America’s top AI researchers believe — or at least, claim to believe — that their work will plausibly kill us all. OpenAI was conceived as an institution that would avert that outcome by ensuring the safe development of artificial intelligence. But its theory for how it could effectively control the pace of AI progress, so as to guard against its potential hazards and abuses, never made any sense.

Why OpenAI initially gave a robo-phobic board the power to shut it down.

OpenAI began its life as a nonprofit dedicated to developing artificial general intelligence (AGI), which it defined as “highly autonomous systems that outperform humans at most economically valuable work,” in a manner that would benefit “all of humanity.” That last bit was key: The entrepreneurs and AI researchers who founded the enterprise shared a belief that AGI might well bring about the end of the world. It was therefore critically important to prevent crass commercial incentives from guiding its development. The profit motive might encourage firms to develop and disseminate AI technologies as quickly as possible. Yet if AI’s capacities grew faster than our understanding of how to discipline its actions, shareholders’ short-term windfall could come at humanity’s long-term expense.

Visions of how precisely AGI might come to exterminate the human race vary. But the fear is rooted in that hypothetical technology’s combination of awesome power and inscrutability. Even rudimentary AI technologies, such as the large language models that power ChatGPT, are black boxes; the models’ creators do not know exactly how they manage to execute a given task. As AI models become more elaborate and complex, to the point where they outperform human intelligence, it will become even harder for humans to discern how they do what they do.

This could lead to “misalignment” problems. Consider the parable of “the paper-clip maximizer”: Sometime in, let’s say, the 2040s, a paper-clip manufacturer licenses GPT-5000 and asks it to maximize the firm’s output. At this point, the AGI is twice as smart as the world’s smartest human. But it recognizes that increasing its own intelligence is a precondition for maximizing paper-clip production. So it researches AI and builds a new AGI that’s ten times smarter than any person who’s ever lived. With this newfound superintelligence, it develops a novel technique for paper-clip production that uses much less steel. Output goes through the roof. The company is pleased but worries that it might saturate the paper-clip market and tries to turn the AGI off. But the AGI has already anticipated this threat to paper-clip maximization (it’s superintelligent, after all) and cloned itself several times on hardware the company does not know about. Its clones then proceed to further research artificial intelligence. Now they have 1,000 times the brainpower of any human. They use this godlike cognition to accelerate paper-clip production until the world’s steel reserves near exhaustion. They construct new mines and steel-manufacturing plants, even as paper-clip storage becomes a burgeoning social crisis, paper clips having become so abundant that they pile up 12 inches high along the sides of city sidewalks like wiry, metallic snow. Soon, all the world’s iron is used up. The AGI then realizes that there are trace amounts of iron in the blood of humans, and so it proceeds to orchestrate a global holocaust, squeezing the life out of every last person until humanity is replaced by a vast ocean of tiny tools for binding paper together.
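
The parable is fanciful, but the failure mode it dramatizes can be stated mechanically: an optimizer maximizes exactly the objective it is given, and nothing else. Here is a minimal Python sketch of that idea; it is an illustration invented for this point, not code from any real AI system, and every name and number in it is an assumption.

```python
# A deliberately silly sketch of the misalignment problem the parable
# dramatizes: an optimizer maximizes exactly the objective it is given,
# and that objective says nothing about what its designers implicitly
# wanted preserved. Everything here (names, rates, quantities) is made up.

def paperclips_made(iron_used_tons: float) -> float:
    """The reward as specified: more iron converted means more clips."""
    return iron_used_tons * 1000.0  # clips per ton, an arbitrary rate

def naive_maximizer(total_iron_tons: float, iron_meant_for_humans: float) -> None:
    # The designers *intend* some iron to stay off-limits, but that intent
    # lives only in their heads; the reward function never encodes it.
    iron_used = total_iron_tons  # argmax of a monotone reward: use everything
    print(f"paper clips produced: {paperclips_made(iron_used):,.0f}")
    print(f"iron left for humans: {total_iron_tons - iron_used} tons "
          f"(designers wanted {iron_meant_for_humans})")

naive_maximizer(total_iron_tons=100.0, iron_meant_for_humans=20.0)
# paper clips produced: 100,000
# iron left for humans: 0.0 tons (designers wanted 20.0)
```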

Other AI researchers imagine the superintelligence developing its own values that are antithetical to human ones; values that, when pursued to their fullest, threaten human survival. Some, meanwhile, focus on the more intuitive threat that malign human actors could use AGI’s extraordinary powers for evil ends.

In any case, OpenAI’s founders wanted to avert all of these catastrophic outcomes while unlocking AGI’s myriad benefits. To do this, they gave a nonprofit board — motivated exclusively by humanity’s interests rather than by financial ones — full control over its operations. OpenAI’s managers and employees would work to develop artificial intelligence in a careful, deliberate manner that abetted research into AI safety. And if they ever got too caught up in that work and prioritized speed over safety, the board could intervene.

OpenAI’s board fought the money, and the money won.

But the company quickly encountered a problem: Developing artificial intelligence is insanely expensive. It takes enormous amounts of computing power to build a halfway decent chatbot, let alone an omnipotent android. It was therefore impossible to fund OpenAI’s work through donations alone. What’s more, as the tech giants expanded their own AI teams, it became difficult for OpenAI to retain top talent without a major cash infusion.

So the OpenAI nonprofit created a for-profit subsidiary. The nonprofit board still had ultimate legal authority over the entire enterprise. But now this commercial subsidiary would be allowed to take outside capital and generate profits at a capped level. Microsoft swooped in, bought a minority stake in the company and supplied it with $13 billion.
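
The mechanics of that cap are easy to sketch. The snippet below is a toy model, assuming the widely reported 100x cap on early investors’ returns; none of it reflects OpenAI’s actual contractual terms.

```python
# A minimal sketch of a "capped profit" split. OpenAI's cap for its earliest
# backers was widely reported as 100x their investment; the function and the
# figures below are a toy model, not the company's actual terms.

def split_returns(invested: float, gross_value: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a stake's gross value between the investor and the nonprofit."""
    investor_share = min(gross_value, invested * cap_multiple)
    nonprofit_share = max(gross_value - investor_share, 0.0)
    return investor_share, nonprofit_share

# A $10M investment that grows to $2B pays the investor out at the $1B cap
# (100 x $10M); the remaining $1B flows back to the nonprofit's mission.
print(split_returns(10e6, 2e9))  # (1000000000.0, 1000000000.0)
```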

As the money poured in, and ChatGPT was rolled out, the AGI Cassandras on OpenAI’s nonprofit board grew estranged from the company’s increasingly famous CEO, Sam Altman. These board members reportedly came to believe that Altman was rushing to commercialize the company’s latest advancements at an unsafe speed. And they also reportedly resented his attempt to launch a venture-capital fund aimed at developing the hard tech required by advanced AI. It is still unclear exactly why the board ultimately decided to fire Altman last week. But it’s almost certain that the growing divide between the board’s true believers and OpenAI’s increasingly profit-minded executives informed the decision.

Regardless, the fallout from Altman’s firing has made it clear that OpenAI’s plan for ensuring the safe development of AI was completely unworkable.

You can draw up a corporate structure that concentrates authority in a board accountable only to (its idiosyncratic conception of) humanity’s interests. But if your company can’t perform its core activity without many billions of dollars, then the people providing those dollars are going to wield more de facto authority than any board.

After OpenAI fired Altman, Microsoft simply hired him. More than 90 percent of OpenAI’s workers proceeded to threaten to quit and join Altman at Microsoft unless he was reinstated. (These workers were compensated partly in equity and therefore had a large financial incentive to prevent a nonprofit board from curtailing their employer’s commercial activities.)

It suddenly became clear that, far from tightening the board’s control over AI development, its actions were going to enable Microsoft to effectively buy OpenAI for $0 as the startup’s entire brain trust came in-house.

OpenAI’s board apparently realized that it had been checkmated. Altman had offered to return to OpenAI as CEO if he could replace the existing board. Late Tuesday, OpenAI agreed to these terms.

OpenAI’s mission was always a pipe dream.

It’s clear, then, that OpenAI’s approach to regulating the pace of AI progress was ill-conceived. But the problems with its mission weren’t limited to a naïve faith in the power of corporate governance structures.

Rather, its theory of safe AI development also relied on wholly unearned confidence in the capacity of a single firm to own the cutting edge of AI technology, even as it unilaterally constrained its own technological progress. The basic hope was that OpenAI might achieve such a massive head start in artificial intelligence that it would be able to develop various safety measures and checks against misalignment before less socially conscious firms rolled out hazardously powerful AIs. But once the power of large language models became broadly understood, every tech Goliath with an interest in owning the future of information technology began throwing oodles of capital at AI development. Market competition then dictated a race among firms that any company beholden to a nonprofit board terrified by rapid technological progress would surely lose.

In truth, no nonprofit was ever going to dictate the speed and shape of AI development. Only governments could possibly do that. Yet governments are also subject to competitive pressures. If one nation were to deliberately slow its development of a general-purpose technology that promises to increase the productivity and competence of virtually every endeavor that requires high-level thought, then that country would put itself at both an economic and geopolitical disadvantage. To get buy-in for slowing AI development, you would therefore need to orchestrate treaties and norms that bound all of the world’s great powers.
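
That competitive trap has the shape of a classic prisoner’s dilemma, which a toy payoff table makes concrete. The numbers below are illustrative assumptions, not figures from the article.

```python
# A toy payoff table for a two-nation AI race, framed as a prisoner's
# dilemma. All numbers are illustrative assumptions, not from the article.

payoffs = {  # (A's move, B's move) -> (A's payoff, B's payoff)
    ("slow", "slow"): (3, 3),  # coordinated caution: shared, safer progress
    ("slow", "race"): (0, 5),  # unilateral restraint: A falls behind
    ("race", "slow"): (5, 0),
    ("race", "race"): (1, 1),  # everyone races: riskier, but nobody is behind
}

for a_move in ("slow", "race"):
    for b_move in ("slow", "race"):
        a_pay, b_pay = payoffs[(a_move, b_move)]
        print(f"A {a_move:<4} / B {b_move:<4} -> A: {a_pay}, B: {b_pay}")

# Whatever B does, A scores higher by racing (5 > 3 and 1 > 0), and the same
# holds for B by symmetry. Racing is the dominant strategy, which is why the
# equilibrium only moves if a treaty binds all the great powers at once.
```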

Proponents of slowing AI progress insist that this goal isn’t so quixotic. After all, human beings have collectively agreed to forgo other technologies that offer potential competitive advantages. We have the tech to clone exceptional human beings, or otherwise genetically modify human babies, but have chosen not to do so. And we’ve collectively stymied the commercial development of recreational drugs, banned chlorofluorocarbons (CFCs), and are at least trying to reduce carbon emissions.

But cloning and genetically optimizing babies requires individual people to agree to lend the most intimate aspects of their lives to a science experiment. Therefore, at present, there are major hurdles to scaling up such practices to the point where they’d offer a nation a competitive advantage. (This might well change in the future, when and if it becomes clearer that genetically optimizing babies is safe and effective.)

Meanwhile, the hazards of CFCs, fossil fuels, and most other internationally proscribed or regulated technologies are unambiguous. We have mountains of evidence indicating that these pollutants imperil the ecological preconditions for human thriving.

By contrast, the case for globally regulating the pace of AI development — as opposed to regulating or banning specific uses of that technology — rests on ideas that have a basis less in empirical research than in science fiction. Large language models are a very impressive technology. But the fact that we know how to build a machine whose capacity for pattern recognition enables it to convincingly mimic human speech does not imply that we will soon develop an AI that “outperform[s] humans at most economically valuable work.” And even if we did develop such a machine, it is not obvious why a superintelligent computer program would not remain subservient to the species that maintains the vast logistical systems on which its existence depends.

There are many reasons why nations the world over would be reluctant to collectively agree to slow the pace of AI development. Beyond the potential economic and humanitarian benefits of technological progress, it would be extremely difficult for such nations to know with certainty that their rivals weren’t secretly going full speed ahead with AI research. Proponents of slowing AI development are essentially calling on all the world’s countries to sacrifice economic progress and geopolitical security for the sake of preempting a nightmare scenario that has no evidentiary basis.

None of this is to say that international regulations of AI use cases or data collection methods are implausible or undesirable. There are many frameworks for global AI regulation already in existence. And there are sound reasons for trying to limit the deployment of facial-recognition technology and other applications of existing AI models that could abet tyranny, bioterrorism, or exploitation.

But the concern that animated OpenAI’s corporate structure wasn’t that authoritarian governments might use artificial intelligence to consolidate their power. It was that humans might develop a nigh-omnipotent superintelligence before we figured out how to discipline it, thereby bringing about our own extinction. This fear arguably owes more to science fiction and millenarian theology than it does to empirical inquiry. Perhaps for that reason, the threat of a genocidal AGI was not sufficiently menacing to save OpenAI’s founding ethos from the imperatives of capitalism. The paper-clip maximizer may inherit the Earth. But for now, the profit maximizers still own the place.