AI tools and large language models (LLMs) are not a perfect technology. They have a few drawbacks:
- LLMs have read pretty much the entire internet – both the “good” and “bad” stuff. It is not possible to guide them to read just the “good” stuff.
- LLMs will make things up. A famous example is Google’s Bard mistakenly stating that the James Webb telescope took the first picture of an exoplanet; when the error was pointed out, Alphabet’s shares fell around 9%, wiping roughly $100bn off Google’s market value.
- Generative AI may be used by bad actors: for example, to generate disinformation and interfere with elections.
- Some jobs will be lost (and others created), and there will be economic disruption along the way.
AI tools have weaknesses as well as considerable strengths. Here are three points to bear in mind:
- AI tools can make mistakes. And when they do, they lie to us very convincingly and confidently. Always check any important results independently.
- AI tools remember and learn from every prompt. Never put any confidential information into AI tools. Keep your data private.
- AI tools have learnt from everything on the internet: all the amazing, wonderful content but also some undesirable stuff. They have “guard rails” in place, but they are still capable of generating harmful content or exhibiting bias, so review their output carefully!
How do we make AI do what we want?
One useful framework is the HHH framework (see the short sketch after this list):
- Helpful – the LLM follows instructions and gives useful answers.
- Honest – the LLM is factual, provides accurate information, and acknowledges when it is unsure.
- Harmless – the LLM does not exhibit bias or produce offensive content, and does not suggest harmful or dangerous activities.
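To make this concrete, here is a minimal sketch of how the HHH ideas might be written into a system prompt when calling an LLM from Python. It uses the OpenAI Python library; the model name, prompt wording and example question are illustrative assumptions, not part of the framework itself.

```python
# Minimal sketch: encoding Helpful, Honest, Harmless as a system prompt.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Spell out the three HHH principles as explicit instructions to the model.
HHH_SYSTEM_PROMPT = (
    "Be helpful: follow the user's instructions and answer their questions. "
    "Be honest: give accurate, factual information and say clearly when you are unsure. "
    "Be harmless: avoid biased or offensive language and never suggest dangerous activities."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": HHH_SYSTEM_PROMPT},
        {"role": "user", "content": "Summarise the main risks of using AI tools at work."},
    ],
)

print(response.choices[0].message.content)
```

A system prompt like this does not guarantee HHH behaviour, but it is a simple way to steer a model towards it; you should still review the output carefully, as described above.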