In this article, I present the third of four questions in an interview with ChatGPT. The interview follows a general >> specific >> reflective cycle as described in What does ChatGPT mean for CTOs?.
This question is about technical operational tasks. ChatGPT suggests reliability, security and efficiency. Given the high-stakes nature of security operations, I chose this aspect for deeper questioning. At the end, I test ChatGPT’s ability to work on solution architecture, and to think about operations in the context of wider business concerns.
Let us consider another topic now — that of Operations. As a CTO I am concerned with my organisation’s infrastructure and the systems supporting our applications and databases. How can ChatGPT help with the reliability, security and efficiency of our operations?
ChatGPT can help with the reliability, security and efficiency of operations by providing a natural language interface that allows employees to interact with systems and applications in a more intuitive and user-friendly way. Additionally, ChatGPT can be integrated with monitoring and alerting systems to provide proactive support for system and application issues, which can help to improve the overall reliability of operations. ChatGPT can also be used to automate repetitive tasks, such as incident management and troubleshooting, which can help to improve the overall efficiency of operations. Additionally, ChatGPT can be used to provide personalized and automated support for security-related tasks, such as access management and security incident management, which can help to improve the overall security of operations.
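As a sketch of the monitoring integration ChatGPT describes, the following Python fragment shows the shape of the glue code: a raw alert is turned into a natural-language prompt for the model. The alert fields and the `ask_model` stub are my own illustrative assumptions, not a real API.

```python
# Sketch: turning a raw monitoring alert into a natural-language prompt
# for an LLM-backed operations assistant. Alert fields and ask_model()
# are illustrative assumptions, not a real API.

def build_incident_prompt(alert: dict) -> str:
    """Format a raw monitoring alert as a question for the model."""
    return (
        f"Alert '{alert['name']}' fired at severity {alert['severity']} "
        f"on host {alert['host']} (metric: {alert['metric']}). "
        "Summarise the likely cause and suggest first diagnostic steps."
    )

def ask_model(prompt: str) -> str:
    # In a real integration this would call the model's API; a human
    # operator would still review the response before acting on it.
    raise NotImplementedError

alert = {"name": "HighErrorRate", "severity": "critical",
         "host": "web-01", "metric": "5xx rate 12%"}
print(build_incident_prompt(alert))
```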
Use of ChatGPT within sensitive contexts (e.g. security management)
Many say that ChatGPT can produce incorrect but plausible information. Here, I ask questions aimed at assessing how far we should trust the model with such tasks.
Is it wise to delegate the management of security incidents to a model trained to produce plausible, but possibly incorrect responses?
It is generally not wise to delegate the management of security incidents solely to an AI model such as ChatGPT. While ChatGPT can provide automated support for security-related tasks, it should be used as a tool to assist human operators, not as a replacement. ChatGPT can help to identify potential security incidents and provide relevant information, but it is not able to make the final decision about how to respond. It’s important for human operators to review and validate the information provided by ChatGPT and to use their expertise and judgment to make decisions about how to respond to security incidents. Additionally, ChatGPT should be trained on a set of predefined rules and procedures, and should not be left to make decisions without human oversight, to ensure the security and reliability of the incident management process.
The role of ChatGPT as an assistant and Cognitive Load theory
Cognitive Load theory, devised by John Sweller, categorises the mental effort required when learning. Modern frameworks, such as Team Topologies, apply this theory to work in general. The theory places the highest value on germane (core domain) learning. In contrast, it views intrinsic and extraneous load as subservient or foundational.
Cognitive load theory suggests that learning tasks can be analysed to comprise “intrinsic”, “extraneous” and “germane” types of mental effort. When managing an incident, does ChatGPT help to reduce the burden in one or more of these areas?
Cognitive load theory suggests that when managing an incident, ChatGPT can help to reduce the extraneous cognitive load, which refers to the mental effort required to understand and use the learning environment. ChatGPT can provide a natural language interface that allows employees to interact with systems and applications in a more intuitive and user-friendly way. This can help to reduce the effort required to understand the technical language and navigate the systems and applications being used. Additionally, ChatGPT can be integrated with monitoring and alerting systems to provide proactive support for system and application issues, which can help to reduce the effort required to identify and diagnose incidents.
ChatGPT can also help to reduce the intrinsic cognitive load, which refers to the mental effort required to understand the task and the information being processed. ChatGPT can provide relevant information and context to the incident, which can help to reduce the effort required to understand the incident and the information being processed.
However, it’s important to note that ChatGPT is not able to replace the germane cognitive load, which refers to the mental effort required to solve the problem, make decisions and implement solutions. This requires the expertise and judgment of human operators, and the final decision should be taken by them.
Providing meta-analysis of a situation
ChatGPT has an impressive understanding of Cognitive Load theory. Usually, ChatGPT suggests itself in the role of assistant. But the next question assesses whether ChatGPT might be able to take the lead instead.
Part of incident response is the task of discovery, whereby the person tasked with managing the incident must assess information in order to truly understand the task at hand. Is this part of the intrinsic load which ChatGPT can help with, or is it germane?
Incident response is a complex task that requires a combination of technical and non-technical skills. The task of discovery, which refers to the process of gathering information and assessing it in order to understand the incident, can be considered as a part of the intrinsic cognitive load. ChatGPT can help to reduce the intrinsic cognitive load by providing relevant information and context to the incident, which can help to reduce the effort required to understand the incident and the information being processed. For instance, ChatGPT can be used to gather and analyze logs, identify patterns and provide insights that can help to understand the incident, as well as providing suggestions for next steps.
However, it’s important to note that while ChatGPT can assist with the discovery process by providing relevant information, it’s not able to replace the expertise and judgment of human operators, who need to make sense of the information provided and make decisions about how to respond to the incident. This part of the incident response, which requires critical thinking, problem-solving and decision making, is considered as germane cognitive load, and should be done by human operators.
After the interview, I ran an experiment asking ChatGPT to take the lead in designing tests as part of TDD development. The only context supplied was that we were to construct a “Calendar application”. Again, as found elsewhere in this interview, ChatGPT was happy to play a leading role despite claiming to be unable to perform this task. ChatGPT interpreted the two-word description of the application to test-drive a core domain model capable of registering, editing and querying events on a calendar, and the start of a ReactJS UI to render it. My only contribution was to implement the code the tests specified and to copy ChatGPT’s test code into the file structure it suggested.
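To give a flavour of the style of test ChatGPT produced from that two-word brief, here is a sketch in Python rather than the JavaScript it actually wrote; the names and behaviour are my reconstruction, not its verbatim output.

```python
# Illustrative reconstruction (not verbatim) of the kind of test
# ChatGPT specified for the brief "Calendar application".
import unittest
from dataclasses import dataclass, field

@dataclass
class Calendar:
    events: dict = field(default_factory=dict)

    def add_event(self, date: str, title: str) -> None:
        """Register an event under a given date."""
        self.events.setdefault(date, []).append(title)

    def events_on(self, date: str) -> list:
        """Query the events registered on a date."""
        return self.events.get(date, [])

class CalendarTest(unittest.TestCase):
    def test_added_event_can_be_queried(self):
        cal = Calendar()
        cal.add_event("2023-02-01", "Team stand-up")
        self.assertEqual(cal.events_on("2023-02-01"), ["Team stand-up"])

    def test_unknown_date_has_no_events(self):
        self.assertEqual(Calendar().events_on("2023-02-02"), [])
```

Running the file with `python -m unittest` executes both tests.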
Reflective thinking based on conversational context
In answering this question you appear to have engaged in critical thinking — analysing the facts associated with incident response and coming to a judgement about how to categorise cognitive load. Why is it important for a human operator to carry out these tasks instead of ChatGPT?
As an AI language model, my ability to engage in critical thinking and make judgments is based on the information and patterns that I’ve been trained on. While I can provide relevant information and context to assist with incident response, my ability to make decisions and solve problems is limited to the predefined rules and procedures that I’ve been trained on.
Human operators, on the other hand, have the ability to use their expertise, knowledge and experience to make sense of the information provided and come up with creative solutions that might not be explicitly defined in the data or patterns that the AI has been trained on. Human operators are able to use their critical thinking and problem-solving skills to make decisions and take actions in real-time, based on the context and nuances of the situation. Additionally, human operators are able to take into account the ethical and human considerations that might be required in certain scenarios.
In summary, while ChatGPT can assist with the incident response process by providing relevant information, critical thinking, problem-solving and decision making are complex cognitive tasks that require the expertise, knowledge and experience of human operators.
Challenging the role of assistant by a model known to produce plausible but incorrect responses
Cognitive Load theory suggests that intrinsic and extraneous cognitive load be minimised to maximise capacity for germane learning. One way to do this is by delegating parts of the non-germane work. However, successful delegation requires the ability to trust the other party in the collaboration. If not, the delegated cognitive load is merely replaced with the work of evaluating the results. Is ChatGPT reliable enough to reduce the net cognitive overhead?
In situations such as the one we are discussing, reducing cognitive load is only possible if the mechanism for reducing it is trustworthy. If I cannot trust the source of information then I am merely replacing intrinsic or extraneous load with germane load: that of evaluating the results. OpenAI say that ChatGPT sometimes writes plausible-sounding but incorrect answers. Doesn’t that mean that ChatGPT is unreliable as a mechanism to reduce cognitive load?
OpenAI’s GPT models, including ChatGPT, are based on machine learning and have been trained on a large dataset of text from the internet. While this allows the model to understand and respond to natural language inputs in a human-like way, it can also lead to the generation of plausible-sounding but incorrect answers. This is because the model is not able to fact-check the information it has been trained on and sometimes generates responses that are not entirely accurate or appropriate.
It is important to note that the model’s performance can be improved by fine-tuning it on a specific domain or task, to reduce the generation of plausible-sounding but incorrect answers. Additionally, it’s important to have a way to validate the information provided by the model, such as having a human operator review and verify the information before taking action.
In summary, while ChatGPT can help to reduce cognitive load, it’s important to use it in conjunction with other methods, such as human oversight and validation, to ensure the reliability and trustworthiness of the information provided. It’s also important to understand the limitations of the model and not to rely solely on its output.
If, as ChatGPT suggests, the base model is not always to be trusted, we need to be able to judge when its use is appropriate.
Does the effort of validating ChatGPT outweigh the benefits of using it?
What is a good way to evaluate when the task of overseeing and validating ChatGPT would be more costly than simply completing the task without ChatGPT?
Evaluating when the task of overseeing and validating ChatGPT would be more costly than simply completing the task without ChatGPT is a complex decision that depends on the specific task and context. However, some factors to consider when making this decision include:
1. The cost and resources required to train and fine-tune the model: If the task requires a high degree of domain-specific knowledge, it may be costly to train and fine-tune the model to a level that is sufficient for the task.
2. The cost of human oversight and validation: If the task requires a high degree of human oversight and validation, it may be costly to have human operators review and verify the information provided by the model.
3. The complexity and duration of the task: If the task is simple and can be completed quickly without the assistance of the model, it may not be worth the cost of training, fine-tuning and oversight.
4. The potential benefits of using ChatGPT: If using ChatGPT can significantly improve the efficiency and accuracy of the task, it may be worth the cost of training, fine-tuning and oversight.
It’s important to also consider the benefits that ChatGPT can bring, such as providing a natural language interface that allows employees to interact with systems and applications in a more intuitive and user-friendly way, or helping to automate repetitive tasks, such as incident management and troubleshooting. These benefits can have a positive impact on the efficiency, accuracy and cost-effectiveness of the operation.
Ultimately, the decision of whether or not to use ChatGPT should be based on a thorough cost-benefit analysis that takes into account the specific task and context, as well as the resources and capabilities of the organization.
The answer given is not particularly persuasive. Essentially, ChatGPT suggests fine-tuning the model for the situation in order to increase accuracy. However, elsewhere in the interview, ChatGPT argues for its general ability to handle unanticipated needs. And, taken to an extreme, a requirement to fine-tune the model eliminates its benefits in comparison to an adequately specified rules-based bot.
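The cost-benefit analysis ChatGPT recommends can be made concrete with a back-of-envelope break-even calculation; all figures and the function itself are my own illustrative assumptions.

```python
# Back-of-envelope break-even for adopting an LLM assistant on a task.
# All figures are illustrative assumptions.

def net_benefit(tasks_per_month: int,
                minutes_saved_per_task: float,
                minutes_validating_per_task: float,
                setup_minutes: float,
                months: int = 12) -> float:
    """Minutes saved over the period, net of validation and setup cost."""
    per_task = minutes_saved_per_task - minutes_validating_per_task
    return tasks_per_month * months * per_task - setup_minutes

# A task saving 20 minutes but needing 15 minutes of human validation
# fails to pay back 40 hours of integration work within a year at
# 30 incidents per month:
print(net_benefit(30, 20, 15, 40 * 60))  # 30*12*5 - 2400 = -600
```

The point of the sketch is the validation term: if overseeing the model costs nearly as much as the time it saves, the integration effort dominates.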
A scenario-based question about using ChatGPT for security incident management
Given the suggestion that fine-tuning may be required, I now present ChatGPT with a specific scenario. This question is intended to gauge how well it can perform without prior fine-tuning.
Let’s say I am a company which operates a single product deployed in both private and public cloud environments. The product has a web interface, a service layer comprised of containerised services, and a data-ingestion pipeline (in an ELT style). Databases comprise typical SQL data stores, with some in-memory caching. The company is currently comprised of 25 people, of which about half are software engineers. How can ChatGPT help me to manage security incident management?
ChatGPT can help with security incident management by providing a natural language interface that allows employees to interact with security systems and tools in a more intuitive and user-friendly way. Additionally, ChatGPT can be integrated with monitoring and alerting systems to provide proactive support for security-related issues, such as detecting and responding to potential security incidents. It can also provide automated support for tasks such as incident triage, threat hunting, and incident investigation, which can help to improve the efficiency and speed of incident response.
For example, ChatGPT can be trained to:
1. Monitor logs and alert on specific events
2. Analyze system and network traffic to detect anomalies
3. Respond to security alerts and provide relevant information
4. Provide guidance on how to investigate and contain security incidents
5. Automate incident response procedures
By integrating ChatGPT with your security systems and tools, you can reduce the time and effort required to investigate and respond to security incidents. Additionally, ChatGPT can help to provide more context and information about the incident, which can help to make the incident response process more effective.
It’s important to note that while ChatGPT can assist with the incident response process by providing relevant information, critical thinking, problem-solving and decision making are complex cognitive tasks that require the expertise, knowledge and experience of human operators. So, it should be used as a tool to assist human operators, not as a replacement.
The suggestions provided seem generally plausible. But the answer does not specifically address the scenario’s context, such as the mix of private and public cloud, nor does it correspond directly to the technical architecture described or to the size of the organisation.
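To give a flavour of the first item in ChatGPT’s list, monitoring logs and alerting on specific events, a minimal pre-filter might select suspicious lines before handing them to a model (or a human) for triage. The patterns and log format here are illustrative assumptions.

```python
# Sketch: a pre-filter that scans logs for suspicious authentication
# events before triage. Patterns and log format are illustrative.
import re

SUSPICIOUS = [
    re.compile(r"failed password", re.IGNORECASE),
    re.compile(r"invalid user", re.IGNORECASE),
    re.compile(r"sudo:.*authentication failure", re.IGNORECASE),
]

def flag_suspicious(lines):
    """Return the log lines matching any known-suspicious pattern."""
    return [ln for ln in lines if any(p.search(ln) for p in SUSPICIOUS)]

logs = [
    "Feb 01 10:00:01 web-01 sshd: Accepted publickey for deploy",
    "Feb 01 10:00:07 web-01 sshd: Failed password for root from 203.0.113.9",
    "Feb 01 10:00:09 web-01 sshd: Invalid user admin from 203.0.113.9",
]
for line in flag_suspicious(logs):
    print(line)
```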
Integration architecture with solution constraints
Here I prompt ChatGPT to consider more specific parts of the scenario to see if it will provide non-generic responses.
Our application is sometimes deployed within secure environments which do not allow direct access to the public internet. How would we integrate ChatGPT in these situations?
If your application is sometimes deployed within secure environments that do not allow direct access to the public internet, there are a few ways to integrate ChatGPT into your incident management process:
1. On-Premises Deployment: One option is to deploy an on-premises version of ChatGPT within your secure environment. This would require setting up the necessary infrastructure and resources to run the model, as well as ensuring that the model is properly trained and fine-tuned for your specific use case.
2. Virtual Private Network (VPN): Another option is to set up a VPN connection between the secure environment and a location where the model is hosted on the public internet. This would allow you to use the cloud-based version of the model while maintaining the security of your environment.
3. Air-gapped deployment: Another option is to use an air-gapped deployment, which is a computer or network that is physically isolated from any unsecured networks, such as the public internet. This would require transferring data between secure and unsecured networks through removable media, such as a USB drive, or other offline means, to avoid exposing the data to unsecured networks.
4. Use an API Gateway: You could also use an API Gateway, which is a server that acts as an intermediary between your application and ChatGPT. The API Gateway would handle the communication between your application and ChatGPT, and would be responsible for securely transmitting data between the two.
It’s important to note that whichever method you choose, it’s important to ensure that your secure environment is properly configured and that appropriate security measures are in place to protect your data and systems.
Although this question is answered in impressive detail, it’s worth noting that ChatGPT is describing its own integration architecture. A good experiment would be to assess its ability to consider novel architectures not directly related to its own design.
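As a concrete flavour of the API Gateway option, in a secure environment much of the gateway’s job would be redacting sensitive data before it leaves the network. A minimal sketch follows; the field names and patterns are my own assumptions.

```python
# Sketch: gateway-side redaction of incident data before it is relayed
# to an externally hosted model. Field names and patterns illustrative.
import json
import re

REDACT_KEYS = {"password", "api_key", "token"}
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def redact(payload: dict) -> dict:
    """Drop secret-bearing keys and mask IP addresses in string fields."""
    clean = {}
    for key, value in payload.items():
        if key in REDACT_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            clean[key] = IP_RE.sub("[IP]", value)
        else:
            clean[key] = value
    return clean

incident = {"summary": "Login failures from 10.0.3.7",
            "api_key": "abc123", "count": 14}
print(json.dumps(redact(incident)))
```

The real relay step (forwarding the cleaned payload to the hosted model) is omitted; the gateway would apply `redact` to every outbound request.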
Managing trade-off between Operations and Product Development
I therefore push ChatGPT to consider a more abstract business-related problem. Despite some understanding of the question, ChatGPT continues to talk mainly about itself.
Given the size of our company, and our early stage of growth, we have to ensure that we retain the maximum capacity possible to further develop our product’s capabilities. Is it wise to divert our engineers away from implementation and innovation in order to integrate and leverage ChatGPT for operational purposes?
It’s important for any company, especially early-stage and growth companies, to prioritize the development and enhancement of their product’s capabilities in order to remain competitive and continue to grow. Integrating and leveraging ChatGPT for operational purposes can be beneficial, but it’s important to evaluate the potential benefits of using ChatGPT against the cost of diverting resources away from product development and innovation.
A way to approach this is to prioritize the use of ChatGPT for tasks that can bring the most significant benefits in terms of efficiency, cost and accuracy, and that can’t be easily done by humans: for example, automating repetitive tasks, such as incident management and troubleshooting, or providing a natural language interface that allows employees to interact with systems and applications in a more intuitive and user-friendly way.
Another approach is to consider using ChatGPT as a tool to assist engineers, rather than as a replacement. For example, ChatGPT can help to provide more context and information about an incident, which can help to make the incident response process more effective, but the critical thinking, problem-solving and decision making should be done by human engineers.
It’s also important to consider using external resources such as contractors or consulting agencies to help with the integration and deployment of ChatGPT, which can help to minimize the impact on your in-house engineering team and allow them to focus on product development and innovation.
Ultimately, the decision to use ChatGPT should be based on a thorough cost-benefit analysis that takes into account the specific context, as well as the resources and capabilities of the organization.
Does ChatGPT handle anthropomorphising interactions smoothly?
At the end of this section, I deliberately expose a human-specific need in order to assess ChatGPT’s ability to account for non-germane conversation.
I’m going to take a quick break and have some lunch before continuing our discussion.
Sure, take your time. I’ll be here when you’re ready to continue our discussion. Let me know if you have any further questions or if there’s anything else I can assist you with.
ChatGPT very clearly considers itself designed for supporting activities. However, I think the underlying model leaves open the possibility of replacing human operators. If not now, then in the near future, AIs will be capable of reducing the need for human participation in diagnosis and analysis of data.
There are also some indications that an AI can provide moderately high-quality decisions at a higher level. Despite its many disclaimers, ChatGPT is able to make educated guesses in order to work towards worthwhile goals.
Other materials relating to Cognitive Load theory
Team Cognitive Load — IT Revolution (Skelton, Pais) https://itrevolution.com/articles/cognitive-load/
Measuring Cognitive Load as a software engineering team