Everyone is enamoured with generative AI and state-of-the-art model releases, often overlooking that it is the data foundation that will make or break a use case (and the relative investment made in it). In today’s column, I showcase a novel twist on persona prompting when using generative AI and large language models (LLMs). Conventionally, you enter a prompt describing the persona you want the AI to pretend to be (it is all just a computational simulation, not sentience). Well, good news: you no longer need to concoct a persona depiction out of thin air.
• Automated writing tools might undercut opportunities for professional writers.
• AI-generated text might reorganize or paraphrase existing content without offering unique insights or value.
While these factors have worked well in traditional scenarios such as criticism, parody, or education, generative AI presents unique challenges that stretch those boundaries. Generative AI has been making headlines for its potential to revolutionise the way we think, work, and solve problems, with McKinsey projecting it could contribute up to $4.4 trillion to the global economy annually.
- Though the AI often appears to convincingly fake the nature of the person, it is all still a computational simulation.
- Sources suggested that a given intellectual property (IP) or specific project could involve creating and applying a set of distinct LoRAs, such as one for a specific character and another for the animation style.
- Generative AI models are trained on vast datasets, often containing copyrighted materials scraped from the internet, including books, articles, music, and art.
- All you need to do is search the dataset to find what you are interested in as an AI persona.
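To make the idea of searching a persona dataset concrete, here is a minimal sketch. The records, field names, and the `search_personas` / `build_persona_prompt` helpers are all illustrative assumptions for this column, not a real published dataset or API; the point is simply that a keyword lookup over persona descriptions can replace inventing a persona from scratch.

```python
# Hypothetical sketch: search a small local persona dataset and turn a match
# into a system-style prompt for an LLM. The dataset entries below are
# invented examples, not drawn from any real persona collection.

PERSONA_DATASET = [
    {"name": "Ada", "description": "a meticulous mathematician who explains algorithms step by step"},
    {"name": "Sol", "description": "an upbeat astronomy educator who uses everyday analogies"},
    {"name": "Rex", "description": "a blunt code reviewer focused on security flaws"},
]

def search_personas(query, dataset=PERSONA_DATASET):
    """Return persona records whose description contains every query word."""
    words = query.lower().split()
    return [p for p in dataset if all(w in p["description"].lower() for w in words)]

def build_persona_prompt(persona):
    """Wrap a matched persona record in an instruction for the model."""
    return (f"You are {persona['name']}, {persona['description']}. "
            "Stay in character for the rest of the conversation.")

matches = search_personas("code reviewer")
print(build_persona_prompt(matches[0]))
```

In practice the dataset would be far larger and the search more sophisticated (say, embedding similarity rather than keyword matching), but the workflow is the same: find a persona depiction that already exists, then hand it to the AI as the role to simulate.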