This week we are debating modern AI systems, especially the commercial ones on just about everyone’s lips when it comes to CVs, high school term papers, and interview answers.
Large Language Models (LLMs), of which ChatGPT and Bard are two examples, are growing in prominence, but will they disrupt the technology world, or are they just another fizzle like blockchain?
In this episode:
- Are these even actually “AI” models, or really just very fast processing of large data sets?
- What should I (and should I not) be putting into LLMs? How does retraining on the data users enter affect what you should put into public LLMs?
- What are some valid use cases for LLMs?
- Does depending on tools like LLMs (or calculators) bring us further from core understanding of how things work? Or should we be OK with the efficiency it brings?
- How does copyright fit into the LLM model and its expectations, and does the legal licensing of training data dull the shine of LLMs?
- Are the analyses from LLMs skewed not only by the data chosen for training, but also by the user base of that LLM?
- How are any of the “good practice” security and privacy requirements for LLMs different from those for any other system? Spoiler alert: not at all.
Unrelated to AI, we also talk about what happens to all the “smart” things in your house when the internet goes out. What stops working? Way more than you might think…
We also have a video channel on YouTube that airs the “with pictures” edition of the podcast. Please head to https://youtube.com/@greatsecuritydebate and watch, subscribe and “like” the episodes.
Some of the links in the show notes contain affiliate links that may earn a commission should you choose to make a purchase using these links. Using these links supports The Great Security Debate, so we appreciate it when you use them. We do not make our recommendations based on the availability or benefits of these affiliate links.
Thanks for listening!
Maybe not bankrupt, but OpenAI has a business problem: https://www.forbes.com/sites/lutzfinger/2023/08/18/is-openai-going-bankrupt-no-but-ai-models-dont-create-moats/?sh=3c8922845e22
Gartner declares LLMs at the peak of inflated expectations: https://www.gartner.com/en/newsroom/press-releases/2023-08-16-gartner-places-generative-ai-on-the-peak-of-inflated-expectations-on-the-2023-hype-cycle-for-emerging-technologies
When ChatGPT goes Bad: https://sloanreview.mit.edu/article/from-chatgpt-to-hackgpt-meeting-the-cybersecurity-threat-of-generative-ai/
The Circle (Movie): https://www.imdb.com/title/tt4287320/
Amazon Sidewalk, and its privacy issues: https://www.popsci.com/technology/amazon-sidewalks-privacy-concerns/
Idiocracy (Movie): https://www.imdb.com/title/tt0387808/
Moore’s Law is dead: https://www.technologyreview.com/2016/05/13/245938/moores-law-is-dead-now-what/
GM deletes CarPlay from future EVs: https://www.theverge.com/2023/4/4/23669523/gm-apple-carplay-android-auto-ev-restrict-access
GM announces $130K EV Escalade (without CarPlay): https://www.theverge.com/2023/8/10/23827059/gm-no-carplay-android-auto-escalade-iq
Fragile Things (Book): https://amzn.to/47BWWkB