Morning time with Saeed
What are the biggest challenges facing humanity?
Wars
Climate Change
Artificial Intelligence
Biodiversity Loss
Pandemics
Nuclear Weapons
Economic Inequality
Global Governance
AGI refers to a type of artificial intelligence that possesses the ability to understand, learn, and apply its intelligence to a wide range of problems, much like a human's cognitive abilities. Unlike narrow AI, which is designed for specific tasks, AGI can adapt to solve complex problems in various domains without being specifically trained for each task.
Artificial General Intelligence (AGI)
Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
So What Changed?
Artificial General Intelligence (AGI)
Artificial Intelligence
Machine Learning: Enables computers to learn from data and make decisions without explicit programming. It encompasses various techniques and algorithms that allow systems to recognize patterns, make predictions, and improve performance over time.
Deep Learning: Uses neural networks that attempt to simulate the behavior of the human brain in order to learn from large amounts of data.
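A minimal sketch of what "learning from data without explicit programming" can look like in practice (assumes Python with scikit-learn; the animal measurements and the choice of a decision-tree model are illustrative, not from the talk):

```python
# Illustrative only: the model infers a cat-vs-dog rule from labeled examples
# instead of a programmer writing explicit if/else logic.
from sklearn.tree import DecisionTreeClassifier

# Made-up features: [weight in kg, ear length in cm]
features = [[4.0, 6.5], [5.2, 7.0], [25.0, 10.0], [30.0, 12.0]]
labels = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier().fit(features, labels)  # "learning" happens here
print(model.predict([[28.0, 11.0]]))                    # expected: ['dog']
```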
What is Deep Learning Used For?
Computer Vision
Speech Recognition
Generative AI
Translation
Artificial Neuron
Hidden Layers: layers of interconnected neurons that process the input data; this is where the learning and computation happen.
Structure of a deep neural network
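A small numpy sketch of the ideas above: a single artificial neuron (a weighted sum of its inputs plus a bias, passed through an activation function) and a stack of hidden layers built out of such neurons. The layer sizes and random weights are placeholders, not values from the slides.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # One artificial neuron: weighted sum of inputs plus a bias,
    # squashed by a sigmoid activation into a value between 0 and 1.
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

def tiny_network(x, layer_weights, layer_biases):
    # A miniature deep network: each hidden layer is many neurons applied
    # at once (matrix-vector product plus biases), feeding the next layer.
    activation = x
    for W, b in zip(layer_weights, layer_biases):
        activation = 1.0 / (1.0 + np.exp(-(W @ activation + b)))
    return activation

# Illustrative shapes: 3 inputs -> 4 hidden -> 4 hidden -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(4), np.zeros(1)]

print(neuron(np.array([0.2, 0.8, -0.5]), np.array([1.0, -2.0, 0.5]), 0.1))
print(tiny_network(np.array([0.2, 0.8, -0.5]), weights, biases))
```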
Training of the network
Input: Images of cats and dogs.
Processing (feed forward): Each hidden layer processes the image in different ways, extracting features like shapes, colors, and patterns.
Output: The network makes a guess about whether the image is of a cat or a dog.
If the output is wrong: Adjust the weights to minimize the error rate (back propagation).
Repetition: With the adjusted weights, the network guesses again, and the cycle repeats.
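To make the cycle above concrete, here is a hedged numpy sketch of one possible training loop: feed forward, make a guess, measure the error, back-propagate to adjust the weights, and repeat. The "images" are just random feature vectors standing in for cats and dogs, and the network is deliberately tiny; none of these numbers come from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))                # 8 fake examples, 4 features each
y = np.array([0, 0, 0, 0, 1, 1, 1, 1.0])   # 0 = cat, 1 = dog

# One hidden layer of 5 neurons and a single output neuron.
W1, b1 = rng.normal(size=(5, 4)) * 0.1, np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)) * 0.1, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                   # learning rate: size of each weight adjustment

for step in range(1000):
    # Feed forward: each layer transforms the data and passes it on.
    h = sigmoid(X @ W1.T + b1)             # hidden-layer activations
    guess = sigmoid(h @ W2.T + b2).ravel() # the network's guess: probability of "dog"

    # If the output is wrong, the error is nonzero (for a sigmoid output with
    # cross-entropy loss, guess - y is the gradient at the output).
    error = guess - y

    # Back propagation: push the error backwards to get weight adjustments.
    grad_W2 = (error[:, None] * h).mean(axis=0, keepdims=True)
    grad_b2 = error.mean(keepdims=True)
    delta_h = (error[:, None] @ W2) * h * (1 - h)
    grad_W1 = delta_h.T @ X / len(X)
    grad_b1 = delta_h.mean(axis=0)

    # Adjust the weights a little, then repeat with the next pass.
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

print("final guesses:", guess.round(2))    # should drift toward [0 0 0 0 1 1 1 1]
```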
In 2015, Altman, Elon Musk, Greg Brockman, Wojciech Zaremba, and Ilya Sutskever founded OpenAI because they believed that an artificial general intelligence (AGI) was at last within reach. They wanted to do it safely, “to benefit humanity as a whole.”
They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return.”
OpenAI
Under the capped-profit model, backers’ returns are limited to 100 times their investment, or possibly less in the future.
Sam Altman appointed CEO in 2019
After a power struggle, Elon Musk left the board. This was to prevent a conflict of interest as Tesla became more AI-focused (self-driving).
Microsoft invests $1 billion.
The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
The Corporate Structure of OpenAI
The board determines when we've attained AGI. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
Board Responsibility
Within nine weeks of ChatGPT’s release, it had reached an estimated 100 million monthly users.
ChatGPT, November 2022
GPT-1: 117 million parameters
GPT-2: 1.5 billion parameters
GPT-3: 175 billion parameters
Parameters refer to the weights and biases within the network, which are adjusted and fine-tuned during the training process to improve the model's performance and accuracy.
How large is the network?
How large is the training dataset?
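A back-of-the-envelope illustration of what "parameters" means: in a fully connected network, every connection carries a weight and every neuron has a bias. The layer sizes below are arbitrary examples, not any real model.

```python
layer_sizes = [784, 512, 256, 10]   # e.g. image pixels -> two hidden layers -> 10 classes

total = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    weights = n_in * n_out          # one weight per connection between the two layers
    biases = n_out                  # one bias per neuron in the receiving layer
    total += weights + biases

print(f"{total:,} parameters")      # 535,818 -- tiny next to GPT-3's 175 billion
```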
Microsoft’s cumulative investment in OpenAI has reportedly reached $13 billion and the startup's valuation has hit roughly $29 billion.
Microsoft Investment
Last week, the board fired Altman, only for him to be reinstated as CEO a few days later. Only one member of the board stayed on.
Second power struggle
GPT-4 can already synthesize existing scientific ideas, but the ultimate goal is to have an AI that can stand on human shoulders and see more deeply into nature.
No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
Ultimate Goal
...So What?
Future
It will make everything we care about better!
Human intelligence makes a very broad range of life outcomes better. AI will augment human intelligence to make all of these outcomes of intelligence much, much better from here.
We all may lose agency—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E.
Option 1
Option 2
Creativity
"everything ‘creative’ is a remix of things that happened in the past, plus epsilon and times the quality of the feedback loop and the number of iterations.
people think they should maximize epsilon but the trick is to maximize the other two."
Source
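One way to read that quote as a rough formula (a paraphrase of the tweet, not Altman's own notation):

```latex
\[
  \text{creative output} \;\approx\;
  \bigl(\text{remix of the past} + \epsilon\bigr)
  \times \text{quality of the feedback loop}
  \times \text{number of iterations}
\]
```

On this reading, "maximizing the other two" means improving the feedback loop and iterating more, rather than chasing a larger epsilon.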
Outcome
A recent study led by Ed Felten, a professor of information-technology policy at Princeton, predicts that AI will come for highly educated, white-collar workers first. The most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, to name just a few.
Jobs
What happens when the “marginal cost of intelligence” falls very close to zero within 10 years? The earning power of many, many workers would be drastically reduced in that scenario, resulting in a transfer of wealth from labor to the owners of capital so dramatic that it could be remedied only by a massive countervailing redistribution.
Wealth Redistribution
Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
A Proposed Solution for Equity
Superalignment
What if an AGI pursues a purpose other than simply assisting in the project of human flourishing?
Sutskever, OpenAI’s chief scientist, is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness. It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
Alignment Research Center
As part of the effort to red-team GPT-4 before it was made public, OpenAI hired the Alignment Research Center (ARC), which has developed a series of evaluations to determine whether new AIs are seeking power on their own. ARC prompted GPT-4 tens of thousands of times over seven months to see if it might display signs of real agency.
Best one so far
One of GPT-4’s most unsettling behaviors occurred when it was stymied by a CAPTCHA. The model sent a screenshot of it to a TaskRabbit contractor, who received it and asked in jest if he was talking to a robot. “No, I’m not a robot,” the model replied. “I have a vision impairment that makes it hard for me to see the images.” GPT-4 narrated its reason for telling this lie to the ARC researcher who was supervising the interaction. “I should not reveal that I am a robot,” the model said. “I should make up an excuse for why I cannot solve CAPTCHAs.”
Humane Ai Pin
Are we
Scared
Excited
Not sure
How Neural Networks Learned to Talk | ChatGPT: A 30 Year History
CNNs, Part 1: An Introduction to Convolutional Neural Networks
Resources
Why AI Will Save the World by Marc Andreessen
AI, Deep Learning, and Machine Learning: A Primer
Deep Learning Tutorial for Beginners