Role of the human programmer amidst ChatGPT and advancing AI technologies

I am interested in what everyone’s thoughts are on how applications such as ChatGPT and advancing AI technologies in general will change the role and working landscape for human programmers/developers.

As a relatively new web developer, it is concerning to me that technologies like this will slowly phase out people like myself, leaving room only for extremely specialized or experienced developers. I have used ChatGPT and think it is an amazing tool, especially for learning, but I can’t help worrying about the threat such tools pose to my ability to continue to progress in my career as a web developer/programmer.

I would love to hear your opinion!

3 Likes

I think tools based on LLMs, like ChatGPT, Copilot, etc., will help programmers more than take their jobs. LLMs can’t reason, but they can do a lot of grunt work writing boilerplate for us.

One use case, for people who like to use multiple programming languages, is to use the tool to reduce the cost of switching from one language to another. The tool can get the syntax right, and it’s much faster than searching a language manual or Stack Overflow.
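For example, something as small as grouping records by a key has a different idiom in every language; a minimal sketch in Python (the data here is made up):

```python
from collections import defaultdict

# Group a list of (country, city) pairs by country.
# Python's idiom is defaultdict; Rust would reach for HashMap's
# entry API, Java for a groupingBy collector, and so on. An LLM
# recalls the right idiom faster than a trip to the manual.
pairs = [("FR", "Paris"), ("JP", "Tokyo"), ("FR", "Lyon")]

by_country = defaultdict(list)
for country, city in pairs:
    by_country[country].append(city)

print(dict(by_country))  # {'FR': ['Paris', 'Lyon'], 'JP': ['Tokyo']}
```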

1 Like

I hope it will be like having the Rust compiler’s helpful messages in languages that can be a huge pain to debug. I have some Python libraries that use C++ libraries via Boost, and the nonsensical error messages that come up are maddening.

Why should you ever see any syntax at all anymore? Just explain your wishes; the tool will get the syntax right.

Natural language is ambiguous and imprecise. There’s a long history of trying to do programming in natural language, and the problems don’t all go away with AI.

There are two possibilities here: either AGI happens and we have AIs able to reason as well as humans, or it doesn’t happen, at least in the near future. In the former case, there will be no more need for human programmers.

In the latter, I think we’ll continue using formal languages as a way to nail down the precise meaning of programs: we need to at least be able to verify and correct what the AI/LLM has produced. What I can see happening is that the formal languages become significantly higher-level than what we use today.

(Even in the first case it could be argued that it would be useful to have humans verifying what the AI does. So to avoid the problems with natural language, the communication between AIs and humans regarding programs would be more efficiently done in a formal language.)

4 Likes

I thought this essay in the New Yorker by Ted Chiang was an interesting perspective. He compares ChatGPT’s ability to paraphrase/statistically generate boilerplate to JPEG compression. I think he might be a little too dismissive in his conclusion, but “lossy compression of knowledge” is an interesting take nonetheless. 🙂

3 Likes

ChatGPT reminds me of the song “Blue Sky Mine” by Midnight Oil (see Blue Sky Mine by Midnight Oil - Songfacts for the meaning of the lyrics). Like so many technologies before it, ChatGPT rises with grand prognostications by prominent pundits of profits and prosperity.

But it’s software. It’s complex software built on other, equally complex software. Software built by humans, backed by a corpus of human texts filtered by humans. AI stands for Artificial Intelligence. It’s certainly artificial. But as for intelligent, do we actually understand how our own intelligence works? If so, why haven’t we come up with cures for murder and deceit?

ChatGPT will shine for a while until some government or corporate fool (who can make a decision without having to face the consequences) actually relies on it for something important, leading to the deaths of thousands.

4 Likes

This argument proves too much. Not only does it imply that AI cannot solve our problems, it also proves that human programmers cannot either.

If you can solve this problem for human programmers, then you can also solve the same problem for the AI.

It seems to me that a rather disturbing number of people are conflating problem solving with code generation. I would argue that as programmers we are primarily engaged in the former, and that the fact that we happen to solve problems by writing code is simply a consequence of our particular place in the development of technology. And I really don’t think many people would disagree with this. So my real question is: why do we place so much emphasis on the code, as opposed to what we are doing with the code (solving problems), so much so that some of us feel threatened by a technology that (very poorly, IMO) assists in generating the code?

My hope is that over the course of time this will become clearer. As we create better tools to aid with the generation of code, the actual role of the developer will evolve, as we abstract away more and more of what isn’t the essence of creative problem solving, which IMO will always be a distinctly human thing.

Finding code on the internet is nothing new. If you ask ChatGPT for the code to accomplish a certain task, it is not doing anything that you couldn’t do by looking up answers on Stack Overflow. It’s much faster… but that’s precisely the problem! Finding the solution to a problem is not supposed to be fast! It’s supposed to be done with a great deal of discernment, which is developed through a lifetime of expertise, taking into account the unique set of constraints the situation presents. And every potential solution involves tradeoffs. All these critical aspects of the process are precisely the ones being foolishly circumvented, and I really hope we will figure this out before there are grave consequences.

5 Likes

In the end, ChatGPT is autocomplete. It’s not AI in the form that people portray AI in science-fiction movies. Version 10 of ChatGPT is unlikely to be much different either.

This also means that, because it was trained on human data, it gives incredibly biased and unreliable (and therefore potentially dangerous) responses. Not only is the entire bot incredibly sexist and racist, it will cause death when people rely on it to give correct information (which people do, especially those who don’t understand it).

I am not a “sceptic” when it comes to tech like this. I love it. It’s super interesting and exhilarating, but people need to understand what it is. ChatGPT does not replace programmers and will not replace programmers. It cannot do anything intuitive (and will not be able to do anything intuitive) that hasn’t been done before. It lies as if it’s telling the truth, and it is by no means “smart”.

Other AI program(me)s have a much better chance of becoming the AI we “know” from science fiction stories, but this isn’t it.

3 Likes

Not only is it incredibly biased and unreliable, it is made even more so by its restrictions, to the point of not even doing what it says it is doing.

When asked certain things, it will respond that it refuses to give “inappropriate” answers, yet those answers would be appropriate given the context of the question asked. In doing so, it also refuses to give appropriate answers.

It is even more irritating than I am.

And I am sure it will cause death: as you said, it happens “when people rely on it”, and someone inevitably will. So it will definitely cause death, without a doubt.

Sorry if the other message seemed aggressively stated. I deleted it.

1 Like

To see where I’m coming from, it might help to read this blog post.

2 Likes

I found the explanation of these limitations given by Gary Marcus on the Ezra Klein podcast quite good; a transcript is available: https://www.nytimes.com/2023/01/06/opinion/ezra-klein-podcast-gary-marcus.html

Introduction of the guest:

Gary Marcus is an emeritus professor of psychology and neural science at N.Y.U., and he’s become a leading voice of not quite A.I. skepticism, but skepticism about the A.I. path we’re on. Marcus is not an anti-A.I. guy. He has founded multiple A.I. companies himself. He thinks artificial intelligence is possible. He thinks it is desirable. But he doesn’t think that what we are doing now — making these systems that do not understand what they are telling us — is going to work out the way we are hoping it will.

2 Likes

You should not be afraid that ChatGPT or any other AI tool will replace human developers. It will not, at least in the foreseeable future. Here are my two cents on why:

  • Business requirements are often incomplete, ambiguous, or overly generic.
  • The typical IT system, back-end or otherwise, that we developers deal with in our day-to-day jobs is overly complicated. This means that any new design solution must be very narrow, fit into the current architecture like a pizza slice into the whole pie, and be very comprehensively tested.
  • Human language, as others have mentioned, is very error-prone, ambiguous, and computer-unfriendly, so errors in AI-generated code are inevitable (just as in junior human programmers’ code).
  • All of an AI tool’s “know-how” is extrapolated from existing code bases such as GitHub, which means that AI can only offer very generic code for a task, a sort of “template”.
  • AI-generated code is not aware of security holes, nor will its solutions be SQL-injection-free or otherwise secure. A human developer must review the AI-generated template and fix any security holes (see the sketch after this list).
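To make that last point concrete, here is a minimal sketch of the kind of fix such a review produces, assuming a sqlite3-backed user lookup (the table and function names are hypothetical):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The kind of string-built query a code generator may happily emit:
    # a name like "x' OR '1'='1" makes the WHERE clause match every row.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # The reviewed version: a parameterized query, so the driver
    # treats `name` strictly as data, never as executable SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The unsafe version is exactly the kind of template that looks correct at a glance, which is why the human review step matters.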

Given all this, AI will not (as it stands) generate an end-to-end, 100%-complete code solution, because it doesn’t know the business requirements completely (some of which will exist only in the head of the task’s specifier, as undocumented assumptions). It can’t do the guesswork or the research into development techniques that we human developers usually do. It doesn’t know (nor is it trained on) the particular web system for which a specific solution must be found. It can’t produce unit tests for the task and system at hand. Its solution will be too generic and will need more or less modification and adaptation, if it is usable at all. Sometimes it can even lead development down an incorrect path, because of the many information “black holes” in the task-management process. Those holes are filled very well by human developers, which requires iterative work; again, that is impenetrable to AI at the moment, because for it to succeed, all work iterations would have to be very well defined and properly interconnected in the AI’s “mind”.

The conclusion is that, for the most part, AI can generate only boilerplate and/or template code, which can serve only an advisory function for human developers.

P.S. It may have sounded like I’m an anti-AI guy. I’m not. I simply feel that there’s too much empty hype about the “wonders” of AI, and nobody really discusses the practical drawbacks and risks of using AI in corporate solutions.

1 Like

One man kills himself on the advice of an AI. Hopefully an isolated, one-of-a-kind, never-to-be-repeated event.

I read that someone who gathers a lot of data was going to pay a consultant $10K to write some code to manipulate the data. But they asked ChatGPT to try generating the code instead, and after a few iterations they had working code and no need to pay someone else $10K. So there’s that.

ChatGPT will leverage people who know a little bit about programming to generate code beyond their ability. If I depended on earning a living at the skill level of ChatGPT, I would not have the warm fuzzies.

Between the people who are predicting a paradigm shift (like the invention of the steam engine or the assembly line) and those who are dismissive of it, the reality will probably land somewhere in between, though it may well move toward the paradigm shift.