I think he’s missed a potential benefit of AI.
He seems to be speaking mostly of greenfield development, the creation of something that has never been done before. My experience was always in the field of “computerizing” existing manual processes.
I agree with him regarding the difficulty of gathering requirements and creating specifications that can be turned into code. My experience working as a solo programmer for tiny businesses (max 20 employees) was that very few people can actually articulate what they want and most of those that can don’t actually know what they want. The tiny number of people left miss all the hacks that are already baked into their existing processes to deal with gaps, inconsistencies, and mutually contradictory rules. This must be even worse in greenfield development.
That is not saying anything negative. If it were any other way, then they would have had success hiring their nephew to do the work. :)
Where I think AI could be useful during that phase of work is in helping detect those gaps, inconsistencies, and contradictory rules. This would clearly not be the AI that spits out a database schema or a bit of Python code, but it would nonetheless be AI.
We have AI systems that are quite good at summarizing the written word and other AI systems that are quite good at logical analysis of properly structured statements. It strikes me that it should be possible to turn the customers’ system descriptions into something that can be checked for gaps, inconsistencies, and contradictions. The customer could work iteratively, alone at the start and then with expert assistance, to develop something that can be passed on to the development team.
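To make that concrete, here is a minimal sketch of the “logical analysis” half of the idea, using the Z3 solver from Python (pip install z3-solver). The rules, their names, and the encoding are all invented for illustration; in practice they would have to be derived from the customer’s own descriptions:

```python
# A minimal sketch: checking a handful of "structured statements"
# for contradictions with the Z3 SMT solver.
# The rules below are invented examples of the kind of statements
# a customer's process description might be reduced to.
from z3 import Bool, Implies, Not, Solver, unsat

is_rush_order = Bool("is_rush_order")
needs_manager_approval = Bool("needs_manager_approval")
ships_same_day = Bool("ships_same_day")

rules = [
    # "Rush orders ship the same day."
    Implies(is_rush_order, ships_same_day),
    # "Rush orders always need manager approval."
    Implies(is_rush_order, needs_manager_approval),
    # "Nothing awaiting manager approval ships the same day."
    Implies(needs_manager_approval, Not(ships_same_day)),
]

s = Solver()
s.add(*rules)
s.add(is_rush_order)  # ask: can a rush order exist at all?

if s.check() == unsat:
    print("Contradiction: these rules make rush orders impossible.")
else:
    print("Consistent; one workable scenario:", s.model())
```

The hard part, of course, is getting from the customer’s prose to statements like these; that is where the summarizing kind of AI would earn its keep.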
The earlier the flaws are discovered, and the more often the customer is the one doing the discovering, the easier those flaws are to address. The most successful and most enjoyable of all my projects were those where I was hired explicitly to help root out all those flaws in the semi-computerized system they had already constructed (often enough by a nephew!).
I’m not talking about waterfall development, where everything is written in stone before coding starts. Sticking with water flow metaphors, I’m talking about a design and development flow that has fewer eddies, fewer sets of dangerous rapids, and less backtracking to find a different channel.
I feel like AI would fall down even harder here. A lot of long-running applications have “secret” rules in them that developers either carry as tribal knowledge or have to read the code to discover. Will AI be sophisticated enough to read a massive repo, probably dependent on several others, and come away with a realistic understanding of the requirements inherent in that code system? Because that’s what we pay senior devs to be good at figuring out quickly. I find myself skeptical that AI will be able to do that in a trustworthy way, given how it “hallucinates” now and doesn’t have the concept that it just doesn’t know sometimes. If a developer has to spend time checking the AI’s assertions about the rules, is that actually going to be faster than just keeping them in their mind or doing the research themselves?
I agree with most of what you said, but I think I was not clear in my presentation of the domain of operations. I was not speaking of rewriting an existing system, but of gathering requirements for a system intended to replace existing manual systems, or of creating systems for brand-new tasks.
That is, there is no existing code to work with, or at least nothing that is fit for purpose. Thus, you are starting at the beginning, where people have no choice but to describe something they would like to have.
Your reference to hallucination leads me to think that you are limiting your concept of AI to the generative large language models. There are other AI systems that operate on different principles. I was not suggesting that a G-LLM was the right tool for the job, only that AI could be brought to bear in analyzing requirements and specifications.
I wasn’t talking about rewriting an existing system either. I’m talking about adding to a system. In order to do that effectively, you need to understand the system as it stands and consider how any new requirement could clash with, or be impossible under, the current set of requirements. This is why I bring up the AI needing to pull a set of requirements from the existing code. You cannot add requirements without knowing the requirements that already exist.
I think that hallucination is still a massive issue. I don’t even like to call it hallucination, because what it really is is bad guessing. We should never forget that all any AI does is guess. It doesn’t reason about anything or connect information together. AI will hold contradictory positions because of this.
Currently we have no way to make an AI declare that it just doesn’t know, or even, very often, ask for more information before making a decision, because the method of training an AI is literally guess-and-check.
For that reason, I don’t think that AI will ever be the tool for the job when it comes to any kind of requirements gathering. I mean, I guess you could use it, but you always run the risk of ending up like that lawyer whose filing cited made-up cases. The AI made things up because all it does is make its best guess, and it doesn’t care if that guess is grounded in much of anything at all.
Ah, I understand now. Yes, I think that maybe I agree with you in general.
I still think that AI operated by ethical experts has much to offer when used not as an automated replacement, but as a tool that can save time and help verify accuracy. I’m thinking in terms of a kind of teamwork where one member of the team is an AI system or assistant.
You’re right, the best part about AI is automating the annoying part of actually implementing what you want to code. Now you have more time to think about requirements and speed up the process, maybe getting several iterations in to really refine a product. However, ChatGPT is gonna stay a helper-function writer for the next few years, I think.
I think that ChatGPT is probably the wrong tool for what I’m imagining. I’m thinking more in terms of “hypothesis generators” and “theorem testers” that, as far as I know, are not using the methods of ChatGPT in their operation. I think that those kinds of tools and others like them could be used to help clarify requirements before coding even starts.
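To give a flavor of what I mean without pretending to name a specific product: even a mundane relative of those tools, property-based testing, can hunt for counterexamples to a drafted rule. Here is a sketch using the Hypothesis library (pip install hypothesis); the discount rules and dollar amounts are entirely invented:

```python
# A sketch of automated counterexample hunting with the Hypothesis
# library. The discount rules below are invented for illustration.
from hypothesis import given, strategies as st

def discount(order_total: int) -> float:
    # Two drafted rules, as an imagined customer stated them:
    #   "Orders from $100 up to $500 get 10% off."
    #   "Orders above $600 get 20% off."
    if order_total > 600:
        return 0.20
    if 100 <= order_total <= 500:
        return 0.10
    return 0.0

@given(st.integers(min_value=0, max_value=1_000))
def test_orders_over_100_always_get_a_discount(total):
    # A property the customer would surely endorse. The drafted
    # rules leave a hole: orders between $501 and $600 get nothing.
    if total > 100:
        assert discount(total) > 0
```

Run under pytest, Hypothesis will report a failing total in the 501 to 600 range: exactly the kind of gap I would want surfaced while the rules are still prose, before anyone has written the billing code.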