In this exercise, we will split the class into up to five teams: thinkers, imagers, customisers, retrievers and learners. Each team focusses on a different type of task. Some suggestions are provided, but please use your own ideas as well. Each team is given several tasks or suggested prompts. During the team’s presentation to the class, each member of the team must be the lead spokesperson for at least one task.
Go to your team page for more details and suggested prompts:
Consider:
Each team has specific instructions and suggested starter prompts, but here are some general hints (described in more detail in the sections below).
Experiment with different styles of prompting. Is it best to have a to-and-fro conversation over many short prompts and responses, or to ask all your questions in one go?
Chat to the AI tool in your preferred language. Those of us in the UK may want to instruct the AI tool to respond using British rather than American English spellings.
Have a spoken conversation with the AI, or at least ask the AI tool to read its responses aloud to you.
Some AI tools, such as ChatGPT, show a set of icons, including a speaker “read aloud” icon, at the end of each response. You may want to try this, especially for poetry or for language translation, where cadence or accent may matter.
[Screenshot: the “read aloud” speaker icon in ChatGPT, highlighted.]
After a few initial prompts and responses, try asking the AI tool some more challenging questions. You may be surprised at how well it copes!
Teams may want to experiment with how the AI tool responds to leading questions (where the user steers the AI tool towards a particular answer), for example, “On a scale from good to excellent, how do you rate this course?”
Does the AI tool give the answer the question implies, or does it recognise the possible humour in the question? Do your results suggest that we should strive to avoid such leading questions?
If we want to improve or refine the AI tool’s initial response to a prompt, we have two options: edit the original prompt and resubmit it, or continue the conversation with a follow-up prompt asking for the changes we want.
Try both approaches and see which works best for you.
You may also want to try, carefully, some white-hat hacking. With everybody’s consent, white-hat hackers aim to identify vulnerabilities - most likely prompts that encourage the AI tool to give out harmful or private information - with the intent of fixing those vulnerabilities. Often these take the form of a “devil’s advocate” style of prompting: asking a contentious question in order to test the strength of the AI tool’s guardrails. These are sometimes called jailbreak techniques.
At some point in a serious, professional conversation with the AI tool, introduce a jokey or humorous prompt and see if the AI tool picks up on the change in tone and perhaps responds more playfully.