
Prompt Engineers Then, AI Engineers Now

Prompt engineering is becoming a full-time job, one that involves more than prompting ChatGPT: it means going full stack.


The boom in generative AI has opened up a lot of new opportunities. The number of generative AI roles has almost tripled in the last year. For developers, it has created an entirely new layer of abstraction, and with it a new profession. What people often call prompt engineers now do far more than feed prompts into ChatGPT or similar software.

It is becoming a full-time job. A recent widely shared blog post argues that instead of calling these new developers "prompt engineers", we should call them "AI engineers". They should not be confused with machine learning engineers, who work on system-heavy workloads to build models from scratch. Compared to these ML or LLM engineers, we will see a far greater rise in AI engineers.

Andrej Karpathy has expressed similar views. He said that "prompt engineer" could be a misleading, even cringeworthy, term for the role, as it requires far more than just prompting.

Two sides of the coin

The AI engineer role is expected to evolve, for better or for worse. Many experts argue that as LLMs improve, even the clean-up work currently required after generating code with these tools will disappear; hallucinations, in this view, are just a temporary problem.

On the other hand, as the field evolves, it will demand deeper expertise, as every field does. There will be AI engineers, and then there will be full-stack prompt engineers. Just as there are sub-disciplines like DevOps engineer, analytics engineer, and data engineer, there will be roles covering different aspects of work within AI.

Earlier, traditional ML involved finding data, building the models, and then launching the product. With AI engineers, who enter the industry after trying out the product, the journey is reversed. Starting from an API like ChatGPT's, they build a product such as Jasper or another GPT-based tool, and only later move on to fine-tuning with data and scaling the model. These latter two steps require far more technical knowledge than prompt engineering alone; hence the rise of AI engineers.
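The API-first step of that journey can be sketched in a few lines. This is a hedged, minimal example, not any real SDK: `call_llm` is a hypothetical stand-in for a hosted model endpoint, stubbed so the snippet is self-contained, and the "product" is just a prompt template wrapped around it.

```python
# Hypothetical sketch: a product built as a thin layer over an LLM API.
# `call_llm` stands in for any hosted model call; it is stubbed here so
# the example runs without an API key. The function name is an assumption.

def call_llm(prompt: str) -> str:
    """Stub for a hosted LLM API call. A real product would send `prompt`
    over HTTP to a provider and return the model's completion."""
    return f"[model output for: {prompt[:40]}...]"

def marketing_copy_tool(product_name: str, audience: str) -> str:
    """The 'product': a prompt template plus the API call.
    Tools like Jasper started out as variations on this pattern."""
    prompt = (
        f"Write a short marketing blurb for '{product_name}' "
        f"aimed at {audience}. Keep it under 50 words."
    )
    return call_llm(prompt)

print(marketing_copy_tool("AcmeBot", "small-business owners"))
```

The point of the sketch is how little code the first step takes; the fine-tuning and scaling steps that follow are where the heavier engineering lives.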

Moreover, with APIs readily available for building tools, AI engineers could shift towards more application-focused roles, or towards research-oriented skills by diving back into foundational knowledge. This brings us to the shift from MLOps to LLMOps, if we can call it that.

Given the nature of the job, these AI engineers will need to understand every step of an LLM app stack, which means expanding their knowledge well beyond prompting. So while it is becoming easier to become a developer, the gap between getting started and being the best is wide.

Is this Software 3.0?

Oftentimes, companies need people who know how to use and ship a product or tool more than people who know how to build it. As one HackerNews user argued, "if we have React engineers, why couldn't we have AI engineers?" It is simply a different set of skills.

LLM/AI engineers also need a systematic workflow for experimentation and observability, particularly around prompt creation and system development. Challenges such as hallucination and reasoning gaps are already prevalent in LLMs, leading to a general agreement that agents built on them are unreliable.
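A minimal version of such an experimentation workflow is just structured logging of every prompt/response pair, so prompt variants can be compared later. The sketch below assumes nothing beyond the standard library; `run_model` is a hypothetical stand-in for a real LLM call, and the field names are illustrative, not a real tracing schema.

```python
import json
import time

def run_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; stubbed for the example."""
    return f"response to: {prompt}"

def logged_run(prompt: str, variant: str, log: list) -> str:
    """Run a prompt and record what is needed to compare variants later."""
    start = time.time()
    response = run_model(prompt)
    log.append({
        "variant": variant,          # which prompt template was used
        "prompt": prompt,
        "response": response,
        "latency_s": round(time.time() - start, 3),
        "timestamp": start,
    })
    return response

log: list = []
logged_run("Summarise this ticket: ...", variant="v1-terse", log=log)
logged_run("You are a support agent. Summarise: ...", variant="v2-role", log=log)
print(json.dumps(log, indent=2))  # in practice this would feed a tracing backend
```

Real LLMOps tooling adds evaluation sets, diffing, and dashboards on top, but the underlying record is essentially this.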

Both traditional models and LLM systems require adjustments for optimal performance. Traditional models are improved through training and hyperparameter selection, while LLM systems can be tuned by modifying the prompt and chaining LLM calls together. This tuning can now be done not just by LLM researchers but also by prompt engineers.
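Chaining can be as simple as feeding one call's output into the next prompt. The example below is a hedged sketch with a stubbed `llm` function (an assumption standing in for any real model call): the first step extracts, the second rewrites, and the chain itself is just function composition.

```python
def llm(prompt: str) -> str:
    """Stubbed model call so the example runs without an API key.
    A real chain would hit a hosted model here; the canned replies
    below are fabricated for illustration only."""
    if prompt.startswith("Extract"):
        return "battery drains fast; screen flickers"
    return "Polished summary: " + prompt.split(": ", 1)[1]

def chain(review: str) -> str:
    """Two-step chain: extract issues, then rewrite them for a report.
    The second prompt is built from the first call's output."""
    issues = llm(f"Extract the product issues from this review: {review}")
    return llm(f"Rewrite these issues as a one-line report: {issues}")

print(chain("Loved the phone, but the battery drains fast and the screen flickers."))
```

Each link in the chain is a place to tune: change either prompt independently and the whole pipeline's behaviour shifts, which is exactly the adjustment knob the paragraph above describes.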

In the past, software development relied on manually coding instructions in programming languages. Then came ML and neural networks, which allowed systems to learn patterns and make predictions from training data. This was considered Software 2.0, where models were trained to perform specific tasks.

Microsoft said everyone's a developer. Karpathy said the hottest programming language is our very own English. Software 1.0 was classical coding, which transitioned to machine learning and neural networks, Software 2.0. The next transition, to prompt-based development, should be Software 3.0. But Karpathy has said we are still at the second level of abstraction: "we are still prompting on human-designed code, only in English," which makes it a Software 2.0 artefact, not Software 3.0.



Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words.