Insights from TechLabs’ ChatGPT workshop

Author: Giuseppina Schiavone

“The Good, The Bad and The Ugly”, this was the incredibly catchy title of the workshop on ChatGPT organized by TechLabs Rotterdam on the 20th of April 2023.

ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI and released in November 2022. ChatGPT falls under the umbrella of Generative AI: AI tools used to generate original text, images and sound in response to conversational text prompts.

In this post I share my takeaways from the workshop, complemented with additional links.

The goals of the workshop were to:

  • raise awareness about the threats and opportunities of ChatGPT and Generative AI
  • explain how ChatGPT works in layman’s terms
  • provide a set of tips for effective ‘prompt engineering’, which basically defines ‘how to talk’ to an AI system to get a desired response

After a short introduction by Andreea Moga about TechLabs Rotterdam and its role in spreading digital education to all via high-quality, up-to-date study material and professional mentorship, Morraya Benhammou and Marvin Kunz took the floor.

Morraya Benhammou, Lecturer & Educational Educator at the Hague University of Applied Sciences, shared her experience with the use of ChatGPT in education. She provided powerful examples in which ChatGPT improved her productivity and facilitated the planning and execution of particularly tedious tasks, from preparing exams and study programs to evaluating and grading students’ tests. This gave her more space to build relationships and connect with her students, and more time for creativity. Among others, I found the following aspects of her talk very interesting:

  • ChatGPT and similar systems, able to work as co-pilots/co-workers, offer the possibility to expand human capabilities beyond the bounds of an individual’s personality, by (1) complementing what one is not good at, (2) offering opportunities for learning and engaging in critical thinking, and (3) strengthening an individual’s existing capabilities
  • It is incredibly relevant to educate the younger generation at scale in the use of this technology and to offer classes or workshops on Generative AI tools as early as possible. Since its release, ChatGPT and its variants have been incrementally incorporated into existing software across various applications, and its free public availability made it quickly accessible to everybody. ChatGPT became the fastest-growing consumer app to date, reaching an estimated 100 million users within two months of launch. A demographic breakdown of OpenAI’s audience in the US also shows that those aged 18-24 are the most over-represented. Both students and teachers are making use of it, responsibly or not.
  • ChatGPT might have a role in reducing teachers’ burnout. Burnout is the silent epidemic of the twenty-first century, and teaching is widely recognized as one of the most stressful professions, highly prone to burnout and mental health disorders; the pandemic has only added to teachers’ challenges. Partially offloading or automating time-consuming tasks with technologies such as ChatGPT could help teachers invest more time in strengthening their emotional intelligence and improving their social support from colleagues and supervisors, friends and family. Notably, low emotional intelligence and low social support have been found to be associated with a higher risk of burnout.
  • Although regulating AI is becoming increasingly important to protect human rights, privacy and society as a whole, and the adoption of Responsible AI practices of fairness and transparency is paramount, governments and institutions that have banned ChatGPT, such as Italy or public schools in New York City, or that are considering a ban (here the list of countries where ChatGPT is not available), may be underestimating how far this could set them back in terms of technological, economic and human well-being advancement.
  • A recently developed open-source app, ProfileGPT, allows users to analyze their profile and personality as seen by ChatGPT, aiming to raise awareness about data usage and responsible AI practices. It can perform web searches and parse websites to gather additional information about the user.
  • Generative AI tools are still in their infancy and have the potential to reinforce stereotypes and discrimination and to reproduce and amplify existing social biases; they should be used with caution and with a critical stance.

Marvin Kunz, Behavioral Scientist and Data & AI consultant, dived into more practical aspects, providing examples and tips for prompting ChatGPT. Marvin shared his daily work experience with ChatGPT and other Generative AI tools and, again, highlighted how they are boosting his productivity, for example in code writing, scientific literature reviews, synthesis of market analyses, production of reports and creation of visually appealing presentations.

He explained how the models behind this chatbot, generally referred to as Large Language Models (LLMs), are trained: ChatGPT’s model “learned” from a massive corpus of text by trying to predict what words might come next in a sentence (sequential learning), and the learning is then refined using human feedback through a machine learning technique known as Reinforcement Learning from Human Feedback (RLHF). The sources of data on which GPT-3, the base model of ChatGPT, was trained include books and literature, websites and online content, news articles, datasets, conversational data from social media and Wikipedia, forming a pool of about 300 billion words and covering a time period of roughly 2016 to 2019. It is to be noted that “ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content”. Nevertheless, recently developed plugins for ChatGPT allow it to access third-party knowledge sources and databases, including the web.
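
To make the next-word-prediction idea concrete, here is a toy sketch in Python: a simple bigram count model stands in for the neural network, so this only illustrates the training objective, not how GPT-3 is actually implemented, and the tiny corpus is made up for the example.

```python
# Toy illustration (not OpenAI's training code) of the core idea behind
# LLM pre-training: predict the next word given the words seen so far.
# A bigram count model stands in for the neural network.
import random
from collections import defaultdict, Counter

corpus = (
    "generative ai can draft text . "
    "generative ai can draft code . "
    "teachers can draft exams ."
).split()

# "Training": count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generation": start from a prompt word and keep predicting.
token = "generative"
generated = [token]
for _ in range(5):
    token = predict_next(token)
    generated.append(token)
print(" ".join(generated))
```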

Marvin also mentioned key differences between ChatGPT and the more recent ChatGPT Plus, based on the GPT-4 model (such as higher accuracy, multimodality, a larger number of words processed at a time, up to 25,000, and a greater tendency to turn down inappropriate requests that could generate harmful responses), as well as some of the competitors of OpenAI’s products (for example, Midjourney and Stable Diffusion, which generate images from natural language descriptions, compete with DALL-E; the AI chatbot Google Bard, built on Google’s own LaMDA language model, competes with ChatGPT, which uses OpenAI’s GPT-3.5 model).

Among others, I found the following topics of his talk very interesting:

  • the concept of hallucinations: hallucination in AI refers to the generation of outputs that are either factually incorrect or unrelated to the given context. These might be caused by various factors such as the AI model’s inherent biases, lack of real-world understanding, or limitations of the training data. Hallucinations are often triggered by data on which the model has not been trained, making the responses unreliable and inaccurate. The problem of hallucinations is not easy to solve but can be addressed by improving the training data, simulating adversarial scenarios (red teaming), improving transparency and explainability, and keeping a human in the loop to validate the AI system’s output. For this reason, ChatGPT’s developers encourage users to provide feedback and use the “Thumbs Down” button to report incorrect answers
  • ChatGPT is ‘gullible’: it can be tricked, for example using reverse psychology, and can reveal confidential information
  • the use of context and role-playing in prompting (“Act as a…”) can improve the accuracy and quality of the response (see the short sketch after this list)
  • polite prompting: somebody in the audience asked whether it is appropriate to use ‘please’ or ‘thanks’ in conversations with the chatbot, and I loved Marvin’s reply: “If next-generation models are trained on current chat conversations, I would love the new models to be kind, so using polite communication now will help achieve that.” My hope is that we all strive for a kinder world.
  • What’s next? The next step is already happening and it is called AI agents, such as AutoGPT, which, given a goal, are able to create a task list to achieve that goal and iteratively execute and update the tasks until the goal is reached. In my opinion this is really mind-blowing!
  • While AutoGPT makes GPT-4 fully autonomous, GPT4All enables you to run an AI chatbot directly on your local machine without the need for an internet connection, and HuggingChat aims to make the best open-source AI chat models available to everyone.
  • Is Generative AI going to disrupt the job market by replacing the jobs of educators, journalists, lawyers, developers, coders, engineers, data analysts, researchers and many more? Most of the people at the workshop, including myself, currently believe that this is not the case and that this technology can work well for humanity only when humans stay in the loop. At the same time, if this were to happen, we believe that alternative job opportunities could still arise for humans. It remains beyond doubt that investment in education and critical thinking is paramount if we are to remain active players in our own development.
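
As an illustration of the role-playing tip above, here is a minimal sketch using the OpenAI Python client (the pre-1.0 `openai.ChatCompletion` interface current at the time of the workshop); the model name, role description and question are my own illustrative choices, not examples from the talk.

```python
# Minimal sketch of role-playing ("Act as a...") prompting with the OpenAI
# Python client (pre-1.0 interface). Model, role and question are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets the role/context the assistant should adopt.
        {"role": "system",
         "content": "Act as a patient high-school teacher. Explain concepts "
                    "step by step and keep answers under 150 words."},
        # The user message carries the actual question.
        {"role": "user",
         "content": "Why can a language model 'hallucinate' facts?"},
    ],
    temperature=0.2,  # lower temperature -> more focused, less creative output
)

print(response["choices"][0]["message"]["content"])
```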

I believe that the workshop was a success in terms of content, lessons learnt and interactions among the participants.

Why is Generative AI relevant for SAAC?

SAAC’s mission is to help organizations become more transparent and accountable through the strategic exploitation of data. The services that we offer span from sustainability reporting and the design of data-driven sustainability strategies to sustainability analytics training and research. Operationalizing these services requires processing, analyzing, modeling, visualizing and reporting large amounts of complex data of different modalities (e.g. text, tabular data, satellite images). Generative AI might offer interesting solutions for optimizing these services: think of automatic scanning of sustainability reports for auditing compliance with the recently adopted European Corporate Sustainability Reporting Directive, semi-supervised compilation of sustainability reports according to guidelines, or AI-guided design of client-specific sustainability strategies, to name a few possible applications. Responsible use of Generative AI could accelerate the European Green Deal’s race to make Europe a net-zero-emission and more inclusive economy by 2050.
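
As a rough, hypothetical sketch of the first application (automatic scanning of reports for compliance), one could ask an LLM whether a report excerpt addresses a given disclosure requirement. The library usage mirrors the prompting sketch above; the requirement text, excerpt and prompt are illustrative placeholders, not an SAAC tool.

```python
# Hypothetical sketch: ask an LLM whether a sustainability-report excerpt
# addresses a given disclosure requirement. All strings are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

requirement = ("Disclose Scope 1, 2 and 3 greenhouse gas emissions "
               "for the reporting year.")
excerpt = ("In 2022 our direct (Scope 1) emissions were 12 ktCO2e; "
           "indirect emissions were not measured.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Frame the model as an auditor and constrain the answer format.
        {"role": "system",
         "content": "You are an auditor. Answer with 'covered', 'partially "
                    "covered' or 'not covered', then one sentence of justification."},
        {"role": "user",
         "content": f"Requirement: {requirement}\n\nReport excerpt: {excerpt}"},
    ],
    temperature=0,  # deterministic-ish output for an auditing-style check
)

print(response["choices"][0]["message"]["content"])
```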