
AI going fast forward?

Photo Credit: Mark Winkler, Unsplash

In the January issue of Flag It Up, the Sheena Thomson Consulting monthly newsletter, we featured ChatGPT, which was taking the world by storm. For those still unaware, GPT stands for Generative Pre-trained Transformer. Its release in November 2022 reportedly achieved record-breaking user numbers – 175 million – in a very short space of time. In fact, its growth has been so rapid that servers have been groaning to cope. Everyone was using ChatGPT: even CEOs at Davos were allegedly using it to draft email replies.

But ChatGPT's original model is already obsolete just over three months after its release. GPT-4 arrived in March 2023, with sign-up numbers now off the scale. It was closely followed by a number of other big-tech AI product releases, including from Microsoft and Google. A full list of major releases can be found in the reference section at the end.

Over the last month, many of these new AI releases have been tested and their capabilities unleashed on the world. However, three news stories this week stand out for me, each calling AI's rapid and seemingly unstoppable progress into question.

The first news story was the Pope in the puffer jacket last weekend:

Photo Credit: Pablo Xavier via Midjourney

This AI-generated image fooled the world and exposed one of the biggest risks of AI, which we pointed out in January: the spread of misinformation and disinformation. An interview with the image's creator about the AI he used can be found in the reference section at the end.

The second news story was the publication of the UK Government's white paper on AI, which ruled out setting up a dedicated AI regulator. Instead, it called for what has been reported as "light touch" regulation, acknowledging that AI will be very difficult to regulate.

The reaction to the white paper from both regulators and the legal sector has been broadly positive, with its approach seen as pro-innovation. All acknowledge that any regulation will need to be implemented in alignment with the relevant international organisations.

The third news story was the open letter by the Future of Life Institute posing these questions:

“…we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

It concluded with a very clear call to action:

“…we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors.”

This call to action has been heard loud and clear: mainstream media have widely reported that tech figures Elon Musk and Steve Wozniak are among the 1,377 (and counting) industry leaders who have signed the open letter.

It remains to be seen whether this six-month pause will happen. There are real practical complications in implementing it and monitoring compliance. Furthermore, is six months enough? Not if we are talking about introducing safeguards and guardrails against all the risks.

One thing is certain: AI development, and its impact on society, is moving incredibly fast, as this week's flurry of news stories shows. We cannot afford to ignore it. Wherever there are opportunities there are always risks, and those risks have many dimensions.

Whether the risk comes from humans using AI, or from AI applied with unintended (or intended) nefarious motives, we all need to look twice at much of what we now see and read. Trusting our instincts often works best, as does keeping an eye on this theme and on the impact of AI developments on our lives.

I will take a deeper dive into the opportunities and risks of using AI in my next blog. AI can be a force for good, but as this week demonstrates, some caution is needed at the moment.

References and further reading

Buzzfeed: We Spoke To The Guy Who Created The Viral AI Image Of The Pope That Fooled The World

IFL Science: GPT-4 Hires And Manipulates Human Into Passing CAPTCHA Test

Future of Life Institute: Pause Giant AI Experiments: An Open Letter

UK Government: UK unveils world leading approach to innovation in first artificial intelligence white paper to turbocharge growth

Sky News: UK government to adopt 'light touch' regulations around AI as concrete legislation currently tricky
