Can ChatGPT help with Operational support?

Human operations professional against computer background (Image by rawpixel.com)

In this article, I present the third of four questions in an interview with ChatGPT. The interview follows a general >> specific >> reflective cycle.

This question is about technical operational tasks. ChatGPT suggests reliability, security and efficiency as the main concerns. Given the high-stakes nature of security operations, I chose that aspect for deeper questioning. At the end, I test ChatGPT’s ability to work on solution architecture, and to think about operations in the context of wider business concerns.

Use of ChatGPT within sensitive contexts (e.g. security management)

Many say that ChatGPT can produce plausible but incorrect information. Here, I ask questions aimed at assessing how far we should trust the model with sensitive tasks.

The role of ChatGPT as an assistant and Cognitive Load theory

Cognitive Load theory, devised by John Sweller, categorises the mental work required when learning. Modern frameworks, such as Team Topologies, apply this theory to work in general. The theory places highest value on germane (core domain) learning. In contrast, it views intrinsic and extraneous load as subservient or foundational.

Providing meta-analysis of a situation

ChatGPT has an impressive understanding of Cognitive Load Theory. Usually, ChatGPT suggests itself in the role of assistant. But, the next question assesses whether ChatGPT might be able to take the lead instead.

After the interview, I ran an experiment asking ChatGPT to take the lead in designing tests as part of test-driven development (TDD). The only context supplied was that we were to construct a “Calendar application”. Again, as found elsewhere in this interview, ChatGPT was happy to play a leading role despite claiming to be unable to perform this task. From the two-word description of the application, ChatGPT test-drove a core domain model capable of registering, editing and querying events on a calendar, plus the start of a ReactJS UI to render it. My only contribution was to implement the tests specified and to copy ChatGPT’s test code into the file structure it suggested.
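The actual code ChatGPT produced isn't reproduced here, but the kind of model it test-drove can be sketched. The following is a minimal, hypothetical example of the register/edit/query behaviour described above; the class and method names are my own assumptions, not ChatGPT's output.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Event:
    """A single calendar entry (hypothetical model, not ChatGPT's actual code)."""
    title: str
    day: date


class Calendar:
    """Minimal domain model: register, edit and query events."""

    def __init__(self):
        self._events = []

    def register(self, title, day):
        event = Event(title, day)
        self._events.append(event)
        return event

    def edit(self, event, title=None, day=None):
        # Update only the fields supplied
        if title is not None:
            event.title = title
        if day is not None:
            event.day = day

    def events_on(self, day):
        return [e for e in self._events if e.day == day]


# Test-first style checks, mirroring the specified behaviour
cal = Calendar()
meeting = cal.register("Stand-up", date(2023, 3, 1))
assert cal.events_on(date(2023, 3, 1)) == [meeting]

cal.edit(meeting, day=date(2023, 3, 2))
assert cal.events_on(date(2023, 3, 1)) == []
assert cal.events_on(date(2023, 3, 2))[0].title == "Stand-up"
```

In a TDD flow, the assertions at the bottom would be written first, with the `Calendar` implementation filled in to make them pass.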

Reflective thinking based on conversational context

Challenging the role of assistant by a model known to produce plausible but incorrect responses

Cognitive Load theory suggests that intrinsic and extraneous cognitive load be minimised to maximise capacity for germane learning. One way to do this is by delegating parts of the non-germane work. However, successful delegation requires the ability to trust the other party in the collaboration. If not, the delegated cognitive load is merely replaced by the load of evaluating the results. Is ChatGPT reliable enough to reduce the net cognitive overhead?

If, as ChatGPT suggests, the base model is not always to be trusted, we need to be able to judge when its use is appropriate.

Does the effort of validating ChatGPT outweigh the benefits of using it?

The answer given is not particularly persuasive. Essentially, ChatGPT suggests fine-tuning the model for the situation in order to increase accuracy. However, elsewhere in the interview, ChatGPT argues for its general ability to handle unanticipated needs. And, taken to an extreme, a requirement for fine-tuning eliminates the model’s benefits in comparison to an adequately specified rules-based bot.

A scenario-based question about using ChatGPT for security incident management

Given the suggestion that fine-tuning may be required, I now present ChatGPT with a specific scenario. This question is intended to gauge how well it can perform without prior fine-tuning.

The suggestions provided seem generally plausible. But they don’t specifically address the scenario context, such as private versus public cloud. Nor do they directly reflect the suggested technical architecture or the size of the organisation.

Integration architecture with solution constraints

Here I prompt ChatGPT to consider more specific parts of the scenario to see if it will provide non-generic responses.

Although this question is answered in impressive detail, it’s worth noting that ChatGPT is describing its own integration architecture. A good experiment would be to assess its ability to consider novel architectures not directly related to its own design.

Managing trade-off between Operations and Product Development

I therefore push ChatGPT to consider a more abstract, business-related problem. Despite showing some understanding of the question, ChatGPT continues to talk mainly about itself.

Does ChatGPT handle anthropomorphising interactions smoothly?

At the end of this section, I deliberately expose a human-specific need in order to assess ChatGPT’s ability to account for non-germane conversation.

Concluding thoughts

ChatGPT very clearly considers itself designed for supporting activities. However, I think the underlying model leaves open the possibility of replacing human operators. If not now, then in the near future, AIs will be capable of reducing the need for human participation in diagnosis and analysis of data.

There are also some indications that an AI can provide moderately high-quality decisions at a higher level. Despite its many disclaimers, ChatGPT is able to make educated guesses in order to work towards worthwhile goals.

Other materials relating to Cognitive Load theory

Team Cognitive Load — IT Revolution (Skelton, Pais) https://itrevolution.com/articles/cognitive-load/
