Griffin on Tech: ChatGPT bans and the rise of ‘FedGPT’

The Ministry of Business, Innovation and Employment this week became the first government department to explicitly ban staff from using ChatGPT.

MBIE cited data security and privacy risks as the reason for the blanket ban, but the Department of Internal Affairs, home to the Government Chief Digital Officer, is allowing its staff to use ChatGPT and is developing guidance for government agencies on its use.

That makes sense – DIA staff need to understand the technology they are formulating guidance on. But ChatGPT’s creator OpenAI makes clear that anything fed into the ChatGPT web interface goes back to OpenAI and may be used to train its large language models. There have already been high-profile, embarrassing cases overseas of proprietary text entered into ChatGPT turning up in responses to other users’ questions.

That’s definitely a potential data security and privacy risk if text from internal memos and government data is plugged into ChatGPT. But talk of bans ignores the fact that ChatGPT’s features are also available via OpenAI’s application programming interface (API), a paid service that excludes customers’ data from model training.
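To illustrate the difference, here is a minimal sketch of calling the same models through the API with OpenAI’s official Python client rather than the web interface – the model name and prompts are placeholders, and any agency would still need its own contractual and security assessment.

# Minimal sketch: querying an OpenAI model via the API, not the web interface.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You summarise briefing notes in plain English."},
        {"role": "user", "content": "Summarise the key risks in this draft policy."},
    ],
)

print(response.choices[0].message.content)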

Microsoft also offers its Azure OpenAI cloud service, which allows customers to use the generative AI tools on their own data in a secure environment. Our government departments should be experimenting with these tools to see how they can be applied to the workings of government. There’s huge scope to cut down on bureaucratic tasks, gain better insights into government data and automate manual functions across government with this technology.
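As a rough sketch of what that looks like in practice, the same Python client can be pointed at an Azure OpenAI resource instead, so prompts and completions stay within an organisation’s own Azure tenancy – the endpoint, API version and deployment name below are placeholders, not anything a particular agency has set up.

# Minimal sketch: the same chat call routed through an Azure OpenAI resource.
# Assumes the `openai` Python package (v1+); the endpoint, key and deployment
# name are placeholders for an agency's own Azure resources.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-agency.openai.azure.com",  # placeholder endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

response = client.chat.completions.create(
    model="gpt-4-deployment",  # the Azure deployment name, not the raw model name
    messages=[{"role": "user", "content": "Draft a plain-English summary of this internal memo."}],
)

print(response.choices[0].message.content)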

The large tech consulting firms with a presence in New Zealand have already geared up to apply GPT (generative pre-trained transformer) technologies to the public sector. Accenture estimates that 39% of working hours in public sector organisations have a high potential for automation or augmentation.

Accenture is tailoring large language models to the needs of the US Federal Government with a service it has dubbed FedGPT.

“Agencies can curate FedGPT’s modular add-ons, called skills, to fit mission needs, available hardware, and budget – and skills can be developed, enhanced, or decommissioned over time as needs evolve,” Accenture says.

Deloitte is doing the same, suggesting that when it comes to creating government policy, “generative AI can ingest huge volumes of data about a problem, examine past policies and their results, and then create new policy ideas to help achieve the collective goal. Even fundamentally human challenges like reducing speeding or improving tax compliance could be modeled with agent-based discriminators”.

That’s all possible, and generative AI has huge scope to streamline how government works. We can expect consulting firms to push these services heavily here, so it’s important that DIA helps lay the groundwork for the ethical use of these systems – and soon. With National promising to slash IT contracting in government if it is able to form the government in October, the big consulting firms may face another task – how to automate themselves out of lucrative government contracts.

Source: ITP New Zealand Tech Blog
