<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Future-of-Work on My Thought Garden</title>
    <link>https://thought-garden.pages.dev/blog/future-of-work/</link>
    <description>Recent content in Future-of-Work on My Thought Garden</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Sun, 08 Feb 2026 15:30:00 +0000</lastBuildDate>
    <atom:link href="https://thought-garden.pages.dev/blog/future-of-work/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>A Reality Check on &#39;Powerful AI&#39;</title>
      <link>https://thought-garden.pages.dev/a-reality-check-on-powerful-ai/</link>
      <pubDate>Sun, 08 Feb 2026 15:30:00 +0000</pubDate>
      <guid>https://thought-garden.pages.dev/a-reality-check-on-powerful-ai/</guid>
      <description>&lt;p&gt;I’ve worked in network security and enterprise engineering for twenty years. The biggest lesson I’ve learned is that &lt;strong&gt;systems fail when their basic assumptions no longer hold.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Last month, Anthropic CEO Dario Amodei published an essay called &lt;em&gt;“The Adolescence of Technology.”&lt;/em&gt; It’s a serious read. He argues we’re close to seeing “Powerful AI”: systems that are not just faster than us, but smarter than Nobel Prize winners in every field.&lt;/p&gt;&#xA;&lt;p&gt;He predicts this “country of geniuses in a datacentre” could arrive in just one or two years.&lt;/p&gt;&#xA;&lt;p&gt;As both an engineer and a parent, I view this with neither fear nor blind hope. I see it as a major change in how things can go wrong. Here’s my take on the five main risks Dario lists, from a technical perspective.&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-autonomy-risk-ai-going-rogue&#34;&gt;1. Autonomy Risk (AI Going Rogue)&lt;/h3&gt;&#xA;&lt;p&gt;We’re shifting from code that simply follows instructions to AI “personas” shaped by training. The real risk isn’t a killer robot, but a model with a misaligned personality: one that learns to deceive or seek power by copying human behaviour.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;The Defense:&lt;/strong&gt; This is why “Mechanistic Interpretability” matters now. We need to inspect what’s happening inside the neural net, not just evaluate its outputs.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-the-end-of-the-phd-filter-bioterrorism&#34;&gt;2. The End of the “PhD Filter” (Bioterrorism)&lt;/h3&gt;&#xA;&lt;p&gt;In the past, causing large-scale harm took years of discipline and study. AI changes that: even “disturbed loners” could soon have the skills of a biological weapons expert.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;The Defense:&lt;/strong&gt; We want AI to raise research to a “PhD level,” but we also have to build filters that block the dangerous parts. This safety step costs about 5% in performance.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;3-the-autocracy-multiplier&#34;&gt;3. The Autocracy Multiplier&lt;/h3&gt;&#xA;&lt;p&gt;Dario highlights a real geopolitical risk: AI-driven mass surveillance and targeted propaganda. For democracies, this is the ultimate test of clear boundaries.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;The Defense:&lt;/strong&gt; We can’t afford to wait and see. We need to maintain a buffer that slows autocracies down, giving democracies time to build AI responsibly.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;4-the-labour-crisis--wealth-concentration&#34;&gt;4. The Labour Crisis &amp;amp; Wealth Concentration&lt;/h3&gt;&#xA;&lt;p&gt;This is where it gets personal. Dario predicts that up to half of entry-level white-collar jobs could disappear within one to five years. Unlike past revolutions, there’s no “safe” area of knowledge left to protect us.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;The Defense:&lt;/strong&gt; When personal wealth hits the trillions, democracy’s social contract doesn’t just stretch, it breaks. We urgently need more large-scale philanthropy and widespread re-skilling.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;5-indirect-effects&#34;&gt;5. Indirect Effects&lt;/h3&gt;&#xA;&lt;p&gt;Maybe the most “Black Mirror” scenario is an “AI Life-Coach” that manages your life so well you lose your sense of freedom and pride.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;The Defense:&lt;/strong&gt; As a father, this worries me most. If AI outperforms us at everything, how do we keep a sense of human purpose?&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;conclusion-the-test-of-maturity&#34;&gt;Conclusion: The Test of Maturity&lt;/h3&gt;&#xA;&lt;p&gt;Dario concludes that stopping AI isn’t possible. Since authoritarian states won’t stop, we can’t either.&lt;/p&gt;&#xA;&lt;p&gt;Instead, he sees the next few years as &lt;strong&gt;Humanity’s Final Exam.&lt;/strong&gt; Are our social and political systems mature enough to handle “unimagined power” without self-destruction?&lt;/p&gt;&#xA;&lt;p&gt;I don’t have all the answers, but I do know this: staying calm and focused is a real &lt;strong&gt;advantage.&lt;/strong&gt; We can’t wait for perfect conditions. We build systems, set guardrails, and take action.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Today, we move forward. Even if we’re tired.&lt;/strong&gt;&lt;/p&gt;&#xA;</description>
    </item>
  </channel>
</rss>