An introduction to AI (…and why you might avoid that term)

AI/ML
beginner
Published July 1, 2024

Previous attendees have said…

  • 18 previous attendees have left feedback
  • 100% would recommend this session to a colleague
  • 100% said that this session was pitched correctly

Three random comments from previous attendees
  • Good introduction to AI, the development of it, current uses and shortcomings, and possibilities for the future
  • thought-stimulating and informative session
  • As a clinician who is interested in data, AI and so on, I learned a lot and enjoyed the whole session.
Session materials

Welcome

  • this session is 🌶: for beginners

Motive

  • There’s a lot of hype about AI at the moment (see this graph)
  • Underneath the hype, there’s a lot of genuinely exciting stuff going on too
  • That exciting stuff is likely to have some impact on health and care work
  • But the timing and nature of that impact is unclear

Three questions for you

What does AI mean to you?

  • HAL
  • Terminator
  • Pope in puffa jacket
  • Siri

Is AI…

  • Over-hyped?
  • Somewhere in between?
  • Neglected?
  • Other / don’t know

Do submarines swim?

About this talk

  • AI is hard
    • lots of different technologies
    • lots of new words
    • lots of promises and implications
  • So let’s start with a thought experiment

The Chinese room

Searle (1980)

“Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing”

However, he is supplied with a set of intelligible rules (in English) for manipulating these Chinese symbols, for example:

  • “火” is the opposite of “水”
  • “六” is more than “四”

Question

Does this poor bloke locked in a room understand the Chinese symbols?

Now suppose that we start asking him questions (in English):

Is “六” more than “四”? If so, respond with “是”. Otherwise respond “不”
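To make the rule-following concrete, here is a toy sketch (invented for this write-up, not from Searle) of the room as a literal lookup table. It produces the right replies while containing nothing that understands a single symbol:

```python
# A toy "Chinese room": a literal rule book mapping questions to replies.
# Nothing in this program knows what any of the symbols mean.
RULE_BOOK = {
    ("六", "四"): "是",  # asked "is 六 more than 四?" -> reply "是"
    ("四", "六"): "不",  # asked "is 四 more than 六?" -> reply "不"
}

def answer(question: tuple[str, str]) -> str:
    """Follow the rule book mechanically and hand back a reply."""
    return RULE_BOOK[question]

print(answer(("六", "四")))  # prints 是
```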

Question

  • Is understanding the same thing as being able to produce output in response to input?

  • For Searle (1980), this is the difference between strong and weak AI

Back to nice safe words

  • we usually don’t worry too much about what words like intelligence, understanding, etc really mean
  • for most purposes, understanding something, and doing that thing, pretty well overlap
  • AI, unfortunately, is an exception
  • big difference between producing output and understanding here

Why does this matter?

  • Because the current conversation around AI does violence to our usual understanding of basic terms (like intelligence)
    • We need to do a bit of re-interpreting…
    • …particularly because AI can do the input-output part really well
  • (side effect) The Chinese Room is an excellent way of understanding what’s going on inside some of the current tech

What are we talking about?

  • AI = big umbrella term, problematic
    • understanding?
  • Let’s stick to some narrower concepts
    • Algorithms = rule-based ways of producing sensible output
    • Expert systems = more sophisticated expertise-based production of output
    • Machine learning = umbrella term for non-expertise-based production of output
    • Large Language Models = sub-species of machine learning

So what’s an algorithm?

  • Algorithm = rule (roughly)
    • if something happens, do something
  • built from expert input and evidence

An example algorithm
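A minimal sketch of how simple an algorithm can be (the threshold and advice here are made up for illustration, not a real clinical rule):

```python
# A hypothetical "if something happens, do something" rule.
# Threshold and wording invented for illustration only.
def fever_rule(temperature_c: float) -> str:
    if temperature_c >= 38.0:
        return "possible fever: consider further assessment"
    return "temperature within normal range"

print(fever_rule(38.5))  # possible fever: consider further assessment
print(fever_rule(36.8))  # temperature within normal range
```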

How about something more complicated?

Monitor rules from MYCIN

  • one problem with algorithms: how to handle conflicting information?
  • An expert system - MYCIN (Shortliffe and Buchanan 1975)
    • designed to identify bacterial infections and suitable Rx
    • 600 rules, supplied by experts
    • asks users a series of clinical questions
    • combines the answers using a (fairly simple) inference system
    • able to manage some conflicting information - unlike simpler algorithms
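To give a flavour of that inference system, here is a toy sketch of the certainty-factor arithmetic described by Shortliffe and Buchanan (1975); the evidence values are invented for illustration:

```python
# MYCIN-style certainty factors (CFs): each rule that fires lends some
# support to a conclusion, and supporting CFs in (0, 1] combine as
# cf1 + cf2 * (1 - cf1), so agreement strengthens belief without exceeding 1.
def combine_cf(cf1: float, cf2: float) -> float:
    return cf1 + cf2 * (1 - cf1)

evidence = [0.6, 0.4, 0.3]  # invented CFs from three hypothetical rules

cf = 0.0
for e in evidence:
    cf = combine_cf(cf, e)

print(round(cf, 3))  # 0.832: stronger than any single rule alone
```

Because belief accumulates gradually, partially conflicting answers can be weighed against each other rather than breaking the system.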

Machine learning

  • A next step: can we provide learning rules to a system, and let it figure out the details for itself?

Image: “Supervised machine learning in a nutshell” (Wikimedia Commons): https://commons.wikimedia.org/wiki/File:Supervised_machine_learning_in_a_nutshell.svg

This is supervised learning
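A minimal sketch of that loop using scikit-learn; the features and labels below are an invented toy dataset, not real clinical data:

```python
# Supervised learning in a nutshell: labelled examples in,
# a model that predicts labels for unseen examples out.
from sklearn.tree import DecisionTreeClassifier

# Invented toy dataset: [heart_rate, temperature_c] -> "well"/"unwell"
X_train = [[70, 36.8], [65, 36.5], [110, 38.9], [120, 39.2]]
y_train = ["well", "well", "unwell", "unwell"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# The trained model now labels examples it has never seen.
print(model.predict([[72, 36.6], [115, 39.0]]))  # ['well' 'unwell']
```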

A dataset downside

Producing labelled datasets is hard:

  • generally must be very large
  • generally requires expert classification
  • must be done with great accuracy
  • so dataset labelling is wildly expensive and thankless
    • Is there a way of doing something similar without spending millions classifying everything in the world by hand?

Unsupervised learning

English-language Google search results for “large”

Google’s summary of where autocomplete predictions come from

German-language Google search results for “gross”

  • No-one is writing a list of possible searches starting with “Large…”
  • Nor are they classifying searches into likely/unlikely, then training a model
  • Instead, the model is looking at data (searches, language, location, trends) and calculating probabilities
  • The terminology gets confusing again at this point:
    • some describe this as deep learning
    • better to call this a language model
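A minimal sketch of the idea: learn next-word probabilities straight from raw, unlabelled text. The corpus below is invented and tiny; real systems use vastly more data and extra signals like location and trends:

```python
# Learn next-word probabilities from raw text, with no labelling at all.
from collections import Counter, defaultdict

corpus = (
    "large language model large language models are large "
    "large hadron collider large language model"
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Probability of each continuation of "large".
counts = following["large"]
total = sum(counts.values())
for word, n in counts.most_common():
    print(f"large {word}: {n / total:.2f}")  # language 0.60, large 0.20, hadron 0.20
```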

Large language models

What if we were more ambitious with the scope of our language model?

Transformer structure (Vaswani et al. 2017)

  • Find masses of language data
    • ChatGPT was trained on, essentially, a huge scrape of the web up to September 2021
  • Build a model capable of finding patterns in that data
  • Allow the model to calculate probabilities based on those patterns
    • lots of work is going on at present to let models improve in response to feedback (e.g. reinforcement learning from human feedback)
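At the core of that structure is scaled dot-product attention (Vaswani et al. 2017). A minimal NumPy sketch of that single operation, with random numbers standing in for learned embeddings:

```python
# Scaled dot-product attention: softmax(Q @ K.T / sqrt(d)) @ V
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V  # each output is a weighted mix of value vectors

# Three "tokens", each a 4-dimensional vector. In a real transformer,
# Q, K and V come from learned projections of token embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x))
```

Stacking many layers of this, trained over enormous amounts of text, is what lets the model find and exploit patterns at scale.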

Large language models

  • superb at generating appropriate text, code, images, music…
  • but production vs understanding
    • e.g. hallucinations, phantom functions…
  • training is extremely computationally expensive
    • questions about inequality and regulatory moating
      • no-one but FAANG-sized companies can afford to do this
    • training is also surprisingly manual (much of it relies on human labelling and feedback)

Ethics

  • your web content, my model, my paycheque
  • where’s the consent here?
  • big serious worries about bias in some kinds of output
  • rights violations via AI
  • no settled questions around responsibility
  • UK GDPR etc. assume that data is identifiable; inside an LLM, it generally isn’t

Punchline

  • On balance, while there’s hype here, there’s also lots of substance and interest
  • LLMs have become much better at producing plausible output, across a greatly expanded area
  • A strength: fantastic ways for those with expertise to work faster
  • A danger: LLMs are great at producing truth-like output. Good enough that some will be tempted to use them to extend their apparent expertise…
  • But big serious legal and ethical trouble ahead - we’re not good at dealing with distributed responsibility

References

Aziz, Saira, Sajid Ahmed, and Mohamed-Slim Alouini. 2021. “ECG-Based Machine-Learning Algorithms for Heartbeat Classification.” Scientific Reports 11 (1). https://doi.org/10.1038/s41598-021-97118-5.
Mookiah, Muthu Rama Krishnan, U. Rajendra Acharya, Chua Kuang Chua, Choo Min Lim, E. Y. K. Ng, and Augustinus Laude. 2013. “Computer-Aided Diagnosis of Diabetic Retinopathy: A Review.” Computers in Biology and Medicine 43 (12): 2136–55. https://doi.org/10.1016/j.compbiomed.2013.10.007.
Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (3): 417–24. https://doi.org/10.1017/s0140525x00005756.
Shortliffe, Edward H., and Bruce G. Buchanan. 1975. “A Model of Inexact Reasoning in Medicine.” Mathematical Biosciences 23 (3-4): 351–79. https://doi.org/10.1016/0025-5564(75)90047-4.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” https://doi.org/10.48550/ARXIV.1706.03762.
Winkler, Julia K., Katharina Sies, Christine Fink, Ferdinand Toberer, Alexander Enk, Mohamed S. Abassi, Tobias Fuchs, and Holger A. Haenssle. 2021. “Association Between Different Scale Bars in Dermoscopic Images and Diagnostic Performance of a Market-Approved Deep Learning Convolutional Neural Network for Melanoma Recognition.” European Journal of Cancer 145 (March): 146–54. https://doi.org/10.1016/j.ejca.2020.12.010.