As Generative AI technology grows more pervasive in our daily lives, it is more crucial than ever for developers to place a high priority on its responsible and ethical use. Although using AI, including generative models such as the ChatGPT API, has many potential advantages, there are also concerns that need to be carefully considered.
To date, there are more than 100 free chat applications promoting themselves with “ChatGPT” in the application name; most, if not all, of them are monetising the apps through advertisements. There is also a host of new apps that leverage the chat engine for all kinds of productivity and value-added services.
To what extent can we trust these applications?
Can we trust AI models and apps?
There are several ways in which language models, including the ChatGPT API, might be misused. For instance, they can be used to fabricate and disseminate false information, or to create offensive or inappropriate content. These models might also inadvertently produce false or inaccurate output when trained on biased data. This underscores the importance of encouraging ethical and responsible use of these technologies and putting in place suitable measures to protect users and avoid abuse.
Additionally, ChatGPT and other AI language models may also be used to generate advertising revenue by producing text intended to sell goods or services. For instance, a business may utilise the ChatGPT API to develop a chatbot that interacts with customers and makes product recommendations based on their interests and requirements. While employing Generative AI for advertising can be a useful tool for firms to reach their target markets, it is vital to weigh the potential hazards and ethical issues.
Developers should adhere to ethical standards and best practices when building and monetising Generative AI applications, to ensure user privacy and data protection.
Be transparent about data collection: It's crucial to be open about the user data that is being collected, how it is utilised, and with whom it is shared when utilising AI for advertising. This includes abiding by all relevant data privacy rules and regulations, such as the Personal Data Protection Act (PDPA) and the General Data Protection Regulation (GDPR).
Control access to the API: OpenAI limits access to the ChatGPT API to approved users and organisations, in order to prevent misuse of the technology. Developers should also ensure that only authorised individuals and organisations have access to the API, and should take appropriate measures to secure their API keys and prevent unauthorised access.
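In practice, the simplest safeguard against key leakage is never hardcoding credentials in source code. The sketch below is a minimal illustration, assuming the key is supplied via an environment variable (the `OPENAI_API_KEY` name follows OpenAI's convention; the helper function itself is hypothetical):

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from an environment variable instead of
    hardcoding it in source, so it never ends up in version control."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Missing {env_var}; set it in the deployment environment, "
            "not in the codebase."
        )
    return key
```

Keeping the key in the environment (or a dedicated secrets manager) also makes it easy to rotate a compromised key without touching the application code.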
Filter and moderate content: The ChatGPT API offers built-in filters that can be used to prevent the generation of inappropriate or offensive content. Developers should make use of these filters, and may also want to implement additional moderation tools available in the market to ensure that any content generated by the API is appropriate and meets market standards.
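As a rough sketch of what an additional moderation layer can look like, the snippet below runs a keyword blocklist check before generated text is shown to users. The blocklist terms and the `is_safe_to_publish` helper are purely illustrative placeholders, not part of any SDK; a production system should rely on a dedicated moderation API (such as OpenAI's Moderation endpoint) rather than a static word list:

```python
# Illustrative pre-publication filter. The terms below are placeholders;
# a real deployment should call a dedicated moderation API instead of
# maintaining a keyword list by hand.
FLAGGED_TERMS = {"slur_example", "scam_example"}

def is_safe_to_publish(text: str) -> bool:
    """Return False if the generated text contains any flagged term."""
    lowered = text.lower()
    return not any(term in lowered for term in FLAGGED_TERMS)
```

The value of even a simple gate like this is architectural: every piece of generated content passes through one checkpoint that can be tightened or swapped out as moderation requirements evolve.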
Use the API for socially beneficial purposes: Use the ChatGPT API for socially beneficial purposes, such as improving access to information, enhancing education, or supporting mental health, all of which create value for users.
Respect for user privacy: When utilising AI for advertising, developers should respect user privacy and only gather and utilise the data required to deliver the service. This entails getting the user's consent before collecting and using their data, as well as putting in place the necessary security measures to protect it.
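The "gather only the data required" principle can be sketched as a data-minimisation step at the point of collection. Everything in this example is hypothetical (the field names, the `minimise` helper, and the consent flag); it simply illustrates dropping all fields the service does not need and refusing to proceed without consent:

```python
# Hypothetical data-minimisation sketch: keep only the fields the
# service actually needs, and only if the user has consented.
REQUIRED_FIELDS = {"user_id", "query"}  # illustrative field names

def minimise(record: dict, consented: bool) -> dict:
    """Drop everything except the fields required to serve the request."""
    if not consented:
        raise PermissionError("User has not consented to data collection.")
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
```

Enforcing minimisation in code, rather than in policy documents alone, means fields such as email addresses or device identifiers never reach downstream logs or analytics in the first place.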
Consider harm and risk to users: Developers should consider the risk to human life, especially where AI output feeds into decision-making. If the output is likely to affect the user’s well-being or another life, it should be used only with caution and after a comprehensive risk assessment. If the output is considered as good as an average human’s, AND does not affect human life or the well-being of the individual, then it should be reasonable to use, with guidance that the output is artificially generated.
Avoid creating offensive or improper content: Developers should ensure that any content produced by AI for advertising purposes is appropriate, ethical, and adheres to their standards.
Offer value to consumers: When employing AI for advertising, developers should prioritise delivering value to users and refrain from any potential abuses of the technology. This entails, for example, avoiding overly pushy or deceptive advertising techniques and ensuring the user experience is positive.
In addition to following these best practices, developers should be aware of the potential dangers and abuses related to the use of AI in advertising and take the necessary precautions to avoid them, especially when using software development kits and libraries from AdTech platform providers.
This entails being transparent about the use of AI in advertising and informing users in a straightforward manner about how their data is being used.
AI governance rules and regulations that incorporate these ethical tenets and best practices are likely to emerge as the use of AI expands and evolves. These include the European Union’s proposed Artificial Intelligence Act, expected to be passed this year, which aims to create a legal framework for the creation and application of AI and contains clauses to guarantee responsibility, transparency, and non-discrimination in its application.
The use of AI in advertising should be disclosed openly, and users should receive clear information about how their data is being used. This entails protecting users against any risks brought on by generative models like ChatGPT and putting in place the necessary controls to guarantee that the technology is applied responsibly and ethically. Users need to know when they are communicating with a chatbot in a conversation.
To summarise, developers have a lot of room to innovate and construct compelling applications using AI, including the ChatGPT API.
To ensure that user privacy and data protection are addressed, developers must prioritise responsible and ethical use of the technology and adhere to best practices and ethical principles. They should familiarise themselves with principles such as transparency and explainability, fairness, and human oversight. Also, when utilising AI for advertising, developers should be aware of the potential hazards and ethical issues involved and take the necessary precautions to stop any misuse of the technology.
No matter how busy the developers may be, they should ensure that their use of AI is responsible and ethical by being transparent, carefully reading the privacy policies and terms of service of any SDK they use, and adhering to best practices for data protection and security.
For data privacy or governance teams, are you aware of what’s happening behind the scenes? Is there an operational framework in place to govern the use of AI?
These efforts, in turn, will help to promote a more responsible and ethical AI ecosystem, and contribute to the continued growth and innovation of AI applications that benefit society as a whole.
Kevin Shepherdson is the author of “99 Privacy Breaches to be Aware of”. He is the CEO and Founder of Straits Interactive.
DPEX Network is a Community Initiative of Straits Interactive.