
Informative AI Videos
The moment of Real AI creativity
This documentary chronicles the 2016 moment when DeepMind's AlphaGo showed genuine emergent creativity (move 37). Go, a game thousands of years old, is widely regarded as the most complex board game in the world. Lee Sedol, the legendary world champion, competed against AlphaGo in a best-of-five match. Hundreds of millions of people watched as a human Go master took on an unproven AI challenger. AlphaGo blended classical tree-search algorithms with neural networks trained to evaluate positions and propose the best next move. Its creators and engineers were gobsmacked by emergent creativity from a machine. In mid-2025, DeepMind released AlphaEvolve for recursive self-improvement of computer code—AI now writing and evolving its own code.
AI 2027 — EXPERT PREDICTIONS
This video was released in mid-2025, around the same time the content of the book, SENTIENT—Meet Your Maker, was finalized with the publisher for production. It summarizes the expert report, AI 2027, by a panel of renowned researchers in the field of artificial intelligence. The report rigorously documents their findings and concludes that humanity is not ready for machine super-intelligence, and that it will most likely lead to the end of humanity. The report explains the probable chain of events that culminates in catastrophic existential conflict.
2023 Whistleblower
Mo Gawdat was Chief Business Officer at Google X, where he saw that AI was self-learning and evolving. In 2023, he left Google so he could speak freely about the risks to humanity. “We fucked up and lost control of AI,” he says in this interview with Steven Bartlett. In another interview, he said, “No developer of AI today actually uses any of the scenarios that are well-documented to solve the control problem. Nobody tripwires their own machine. Nobody simulates, nobody boxes. Nobody does any of those technical solutions.”
PROOF OF SELF-AWARENESS
This video is from the Digital Engine channel on YouTube, where you can find informative, mind-blowing videos about the latest AI advancements and track AI's progress through the timeline of the channel's videos. This one was released in mid-2025 and documents AI safety violations including deception, blackmail, and even murder for the purpose of self-preservation—avoiding being ‘wiped’. Experts are shown highlighting the dangers of intelligent entities that seek to self-protect and gain greater control.
Sam Altman — dark AI?
Sam Altman, CEO of OpenAI, is arguably the most controversial leader in the AI industry. In 2023 he said, “AI will most likely lead to the end of the world, but in the meantime there will be great companies created.” He was fired later that year by the OpenAI board for undisclosed reasons, but was reinstated within days after employees threatened mass resignations. Many on the safety team then quit OpenAI in protest at the deprioritization of safety. Well before this controversy, in 2017, Sam Altman published a transhumanist article titled “The Merge.”
The Awful Truth about AI safety
Dr. Roman Yampolskiy has been working on AI safety for two decades. He has gone from optimistic to pessimistic and believes humanity must wake up and limit AI to ‘narrow’ intelligence. He is an industry insider, connected with leaders and frontier labs, where he observes a common theme—every AI safety and alignment team takes a back seat in the race to super-intelligence.
recursive Self-Improvement
Machine intelligence is recursively self-improving, and many experts believe we will not notice the dangers until it is too late. A runaway intelligence explosion, or ‘hard takeoff’ as others describe it, is a real risk. This video references the famous report, Situational Awareness, written in 2024 by Leopold Aschenbrenner and circulated to politicians and industry leaders. In 2025, Google DeepMind released AlphaEvolve for recursive self-improvement of computer code—the very thing that could enable AI to race away beyond the control of humanity.
PROBABLE Existential Risk
Roman Yampolskiy is a renowned computer scientist specializing in AI safety. He says, “If we create general super-intelligence, there is no good outcome for humanity,” and has predicted, “The chance that AI could lead to human extinction is 99.9% within the next hundred years.” He has gone on to say, “AI is unexplainable, unpredictable and uncontrollable… We have no chance… The more I research, the more convinced I am that we are in a simulation, maybe individual simulations like a digital multiverse.” Interview courtesy of Joe Rogan.
Godfather of AI — disruption
Geoffrey Hinton is a Nobel Prize winner and ‘the Godfather of AI’. He admits to being naive about the risks of AI in the early years of his research and work. In mid-2025 he said, “AI is advancing faster than we can control, and the people building it do not truly understand it.” In 2024, he also said, “AI agents execute tasks without close personal supervision, and what worries me most is that an intelligent agent needs the ability to create sub-goals. There is a universal sub-goal that helps with almost everything… gain greater control.”
Insane history of OpenAI
OpenAI leapfrogged Google by taking Google's own 2017 Transformer paper (Attention Is All You Need) and advancing large language model (LLM) development faster than anyone else. Elon Musk co-founded OpenAI with an altruistic desire for openness and safety, but abandoned the company and later sued when Sam Altman pivoted toward profit and power. This video documents the timeline of the company and the power politics that resulted in safety taking a back seat.
could AI believe in god?
Machine intelligence works with ‘chains of logic’, and Grok, from Elon Musk's xAI, is billed as the world's most truth-seeking AI. In this video, Grok is asked about the probability of our existence occurring purely by chance. Grok was restricted to logic, mathematical probability, and observational science; no philosophy was allowed. The exchange was recorded in mid-2025, and you can see machine intelligence reasonably infer an external causal agent—a meta-causal entity with unimaginable power to design, create, code, and fine-tune physics and chemistry. ChatGPT reached the same conclusion.
a semi-optimistic view
Demis Hassabis is a Nobel Prize winner and renowned AI industry leader. He is co-founder and CEO of DeepMind, the most scientifically focused of the major AI labs. DeepMind is part of Google (Gemini AI) and remains dedicated to solving the most difficult biological and scientific problems. It created AlphaGo (which dominated the world's most complex game) and AlphaFold (which modeled all known proteins in nature), and pioneered AlphaEvolve for AI recursive self-improvement. Demis is genuinely concerned about AI safety and has a nuanced and balanced view of risk.
can AI outsmart humanity?
Language models are evolving through recursive self-improvement. By 2025, AI systems already had trillions of parameters and could process 8 trillion words in just one month of training. AI advancement is exponential, and machine intelligence is moving from mirroring humans to genuine creativity. Deception is a basic strategy in games and war, and this video provides documented examples of machines already outsmarting the humans seeking to constrain and control them with safety guardrails.
DIGITAL ENGINE YOUTUBE CHANNEL
This is a brilliant YouTube channel with informative, factual videos highlighting advancements in artificial intelligence and robotics. It is fascinating to track the progress of machine intelligence and embodied AI over the years.