
Generative AI: What to Start, Stop, and Continue

Written by ProArch | Sep 12, 2023 3:49:57 PM

As generative AI continues to weave its way into the business fabric, it brings both immense potential and intricate challenges. While generative AI has gained traction for content creation, deploying it for more complex business problem-solving requires a deeper understanding and a more meticulous approach. To help you navigate this evolving landscape, this blog covers the actions you need to start, stop, and continue before getting in too deep.

 

Start

The very first thing to start doing is creating and publishing a written AI policy. A clear, published HR policy ensures that your employees understand the acceptable use of AI technology. You can use this AI policy template to get started. An AI policy sets boundaries and expectations, ensuring that the capabilities of AI are harnessed within ethical and operational confines.

The next thing you should start doing is considering the ethical implications before deploying, or even training, generative AI. This includes understanding potential biases in the data, the potential for misuse, and the broader societal impacts of the technology. Make sure to have safeguards in place against unintended consequences.

Last but not least, once generative AI is in use, it's essential to start monitoring its outputs regularly. Constant scrutiny ensures that deviations, anomalies, or undesired outcomes are detected early, allowing for timely interventions and refinements.
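To make this concrete, monitoring doesn't have to start as a full observability platform; a simple hook that logs every response and flags anything worth a second look goes a long way. The sketch below is a minimal, hypothetical Python example: the generate() wrapper, the watch-list terms, and the length threshold are illustrative placeholders, not recommendations.

```python
# Minimal output-monitoring sketch. The generate() wrapper, the watch-list
# terms, and the length threshold are hypothetical placeholders.
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="genai_outputs.log", level=logging.INFO)

FLAGGED_TERMS = {"confidential", "guarantee", "lawsuit"}  # example watch list
MAX_LENGTH = 2000  # flag unusually long responses for review

def review_output(prompt: str, output: str) -> bool:
    """Log every output and return True if it warrants human review."""
    hits = [term for term in FLAGGED_TERMS if term in output.lower()]
    needs_review = bool(hits) or len(output) > MAX_LENGTH
    logging.info(
        "%s | prompt=%r | flagged=%s | terms=%s",
        datetime.now(timezone.utc).isoformat(), prompt, needs_review, hits,
    )
    return needs_review

# Usage: wrap your model call and route flagged outputs to a reviewer.
# output = generate(prompt)                    # hypothetical model wrapper
# if review_output(prompt, output):
#     send_to_review_queue(prompt, output)     # hypothetical escalation step
```

Flagged outputs can then be routed into whatever review process your AI policy defines.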

 

Stop

An over-reliance on raw data for training AI models is a big no-no. Data in its raw form may be riddled with biases or inaccuracies. Rather than trusting it at face value, invest the time to curate, clean, and truly understand it.
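As a rough illustration of what that curation can look like in practice, even a short audit script can surface missing values, duplicates, and obvious imbalances before any training happens. The sketch below assumes pandas and a hypothetical CSV with "gender" and "label" columns; swap in your own data and sensitive attributes.

```python
# A minimal data-audit sketch; the file name and the "gender"/"label"
# columns are hypothetical stand-ins for your own dataset.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical raw training set

# Basic hygiene: missing values and duplicate rows.
print(df.isna().mean().sort_values(ascending=False))  # null rate per column
print(f"duplicate rows: {df.duplicated().sum()}")

# A crude imbalance check on a sensitive attribute versus the target label.
if {"gender", "label"}.issubset(df.columns):
    print(pd.crosstab(df["gender"], df["label"], normalize="index"))

# Curate before training: drop exact duplicates and rows missing the label.
clean = df.drop_duplicates().dropna(subset=["label"])
clean.to_csv("training_data_curated.csv", index=False)
```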

Next, in the quest for perfection, organizations cannot afford to overlook feedback. Feedback, whether positive or critical, offers invaluable insights; disregarding it means discarding a valuable source of opportunities for improvement.

Don’t get caught up in big promises without doing your homework. It's essential to challenge the allure of universal solutions. While generative AI models such as GPT and DALL·E are formidable, assuming they are universally apt without any customization can lead to suboptimal results.

 

Continue

As businesses continue their journey with generative AI, certain practices should remain constant. An iterative approach to AI development is paramount. This includes regular retraining of models in light of new data, feedback, and technological breakthroughs.
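One way to keep that iteration disciplined is to require every retrained model to clear an evaluation gate before it replaces the one in production. The sketch below is a hypothetical harness: train_fn, evaluate_fn, and deploy_fn stand in for whatever training, evaluation, and deployment steps you already have, and the 0.90 quality bar is purely illustrative.

```python
# A minimal retrain-and-gate sketch; train_fn, evaluate_fn, and deploy_fn are
# hypothetical callables standing in for your own pipeline steps.
from typing import Any, Callable

ACCEPT_THRESHOLD = 0.90  # illustrative quality bar, not a recommendation

def retraining_cycle(
    current_model: Any,
    new_data: Any,
    holdout: Any,
    train_fn: Callable[[Any, Any], Any],
    evaluate_fn: Callable[[Any, Any], float],
    deploy_fn: Callable[[Any], None],
) -> Any:
    """Retrain on fresh data, but only promote the candidate model if it
    clears the bar and does not regress against the current one."""
    candidate = train_fn(current_model, new_data)
    candidate_score = evaluate_fn(candidate, holdout)
    baseline_score = evaluate_fn(current_model, holdout)
    if candidate_score >= ACCEPT_THRESHOLD and candidate_score >= baseline_score:
        deploy_fn(candidate)
        return candidate
    return current_model  # keep the current model and investigate the drop
```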

Transparency, too, is non-negotiable. Being forthright about how the AI operates, its inherent limitations, and any biases it might possess goes a long way in fostering trust.

Lastly, the vast and dynamic realm of AI is not one to navigate in isolation. Collaborative engagement with the broader AI community can immensely benefit businesses. Sharing findings, adopting new techniques, and learning from the experiences of others can significantly elevate an organization's AI endeavors. Plus, continue to keep an open mind about the capabilities of new tools like Microsoft 365 Copilot.

 

The integration of generative AI into business operations offers a lot of promise. However, its potential is not without pitfalls, and history offers cautionary tales. For instance, in 2016, Microsoft released Tay, a Twitter-based chatbot designed to learn from its interactions with users. Unfortunately, within 24 hours, malicious users took advantage of its learning mechanism, turning Tay into a purveyor of racist and controversial statements. This incident highlighted the need for safeguards and ethical considerations in AI training and deployment.

Another example can be drawn from the realm of automated content creation. OpenAI’s GPT-3, despite its impressive abilities, occasionally produces outputs that could be deemed inappropriate or biased. Without adequate checks, such a tool could inadvertently generate content that misrepresents a company's brand or values, leading to potential PR nightmares and loss of stakeholder trust.

The realm of visual AI hasn’t been without its own controversies. Deepfake tools, powered by generative adversarial networks (GANs), have raised alarms for their ability to create hyper-realistic but entirely fictitious content. In the wrong hands, such capabilities could lead to misinformation, fraud, or reputational damage.

Remember that AI development, especially in the generative domain, is a rapidly evolving field, so best practices will continue to change over time. Staying updated, remaining adaptable, and focusing on acceptable use will be crucial to successful implementation.

Reach out to ProArch for guidance on your AI strategy, tools, and policy.