Published on April 9th, 2023 | by Mahbub Hasan


AI will not replace you; it will be a better version of our existing tools

From mid-2022 to mid-2023, AI was in almost every news medium and event. AI had been in the news for quite a while before that, but those stories were written for tech enthusiasts on the science pages. This time, it's on the general feed. We have seen advanced AI at work since the 2000s, and even before this stream of news articles, AI was around us at every step we took on the internet.

Take, for example, the use of machine learning for content labeling and moderation. We have been using AI for face recognition and other biometrics for decades now. In fact, research on algorithms for detecting and differentiating faces goes back to 1965. But the main reason AI stayed on the science pages of a magazine was that the general public had no chance to interact with it directly.

So OpenAI, probably not knowing how much it would help them market their company and products, came up with a simple product called ChatGPT. Mind you, early versions of the engine behind ChatGPT (text-davinci and a few others) existed internally in 2017 and publicly since 2018.

Why ChatGPT caught on


ChatGPT gave users a simple chat interface, and OpenAI made it public so anyone could interact with it. For the first time, people could simply go to the OpenAI site and beta-test an advanced, sophisticated AI through chat. A simple idea and a simple interface: nothing to download, no complex CLI tools to build, no access via code or API.

And it caught on. Regular, highly enthusiastic people like me tried it, and we saw how confidently it can generate a response to our input. It simulates the chatting experience we expect, but with a surprise! Finally, news publishers understood what AI is capable of without needing a computer scientist to decode it for them. And so came the stream of news.

Meanwhile, other companies capitalized on this and created their own solutions, or simply integrated the GPT-3/GPT-4 APIs into their existing products. Companies like Microsoft had other reasons too: Microsoft invested quite a lot in OpenAI and is betting on AI to enhance its products for regular consumers. And this helped a lot. Microsoft integrated a special version of the GPT-4 model into its existing Bing search and made Bing Chat, which is similar to ChatGPT but can search the web for information and present a nicely organized response.

Bing Chat caught on for two reasons, despite its flaws: 1. Microsoft made sure to capitalize on the popularity of ChatGPT, and 2. it is honestly better than Google Assistant at presenting responses (simply because Bing Chat summarizes and organizes responses from multiple web sources, rather than fetching a single answer from the topmost matching webpage). As I said, Bing doesn't always get things right, but the problem isn't unique to Bing Chat; it affects every current-generation AI model.
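For readers curious what "integrating the GPT API" actually looks like, here is a minimal sketch. The field names follow OpenAI's chat-completions request format; the function name and prompt text are my own illustration, and the actual HTTP call is omitted since it requires an API key and network access.

```python
import json

def chat_request(user_message, model="gpt-4"):
    """Build the JSON payload for a chat-completions style API call.
    Field names follow OpenAI's chat format; the HTTP call itself is
    omitted here, since it needs an API key and network access."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    })

payload = chat_request("Summarize today's tech news in three bullets.")
```

Most "GPT-powered" features in existing products boil down to exactly this: wrap the user's input in a prompt, send it to the API, and present the response inside the product's own interface.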

AI is a tool, and you can add it to your toolbox with existing technologies

First of all, I won't go into detail about why fearing AI is BS, and why the current claims that everyone will lose their job are also BS. Text-generation AI, despite being huge and bulky, still requires heavy moderation and backend duct-taping to do what it is supposed to do, and not what it shouldn't. A big pile of data also requires a huge amount of filtering to keep the responses on point. OpenAI is good at that job, but jailbreakers keep trying to bypass those filters to generate outputs you may find offensive, or unusable for presentation.

But that's not the real issue. The real issue is that the models depend on other people's content. Some companies are not happy when OpenAI or another company scrapes their data without permission, so they gatekeep their content from bots and scrapers. If this becomes the norm, how will text-generation AI get new, up-to-date data? Paying each source is costly, and drawing data from only a few open platforms can lead to pollution or data with less variety.

So, as long as creators and AI companies respect each other, you will keep your job, and AI will keep getting its up-to-date snacks. Meanwhile, instead of fearmongering, the news media should promote AI as a tool. You can use it to accelerate workflows: automate the boring part, and add uniqueness to the part of your workflow that matters. For example, do the research yourself, and use AI to write repetitive, boilerplate descriptions. Use AI to race through your emails via summarizing extensions, or to get digests of huge amounts of data.

“AI can help with boring and repetitive writing tasks by automating them. For example, you can use AI to generate descriptions, summaries, or digests for large amounts of data. This can save you time and effort, and allow you to focus on the parts of your work that require your unique skills and expertise.” There, an AI phrased it for me. 🙂
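As a concrete sketch of "automate the boring part": the snippet below packs a batch of emails into a single summarization prompt, ready to hand to whichever text-generation API you use. The prompt template, helper name, and sample emails are all my own illustration; the API call itself is intentionally left out.

```python
def build_digest_prompt(emails, max_chars=4000):
    """Concatenate email snippets into one summarization prompt,
    stopping early so the prompt stays within a rough size budget."""
    body = ""
    for sender, subject, text in emails:
        snippet = f"From {sender} ({subject}):\n{text.strip()}\n\n"
        if len(body) + len(snippet) > max_chars:
            break  # keep the prompt within the model's context budget
        body += snippet
    return (
        "Summarize the following emails as a short bulleted digest, "
        "one bullet per email:\n\n" + body
    )

emails = [
    ("alice@example.com", "Standup moved", "Standup is at 10:30 tomorrow."),
    ("bob@example.com", "Invoice approval", "Please approve the attached invoice."),
]
prompt = build_digest_prompt(emails)
# `prompt` is now ready to send to a text-generation API; the network
# call is omitted here because endpoints and parameters vary by vendor.
```

The boring part (collecting, trimming, and formatting the inputs) is plain old automation; the AI only handles the final summarization step.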

AI hallucinates, so be there to monitor

AI hallucination refers to the phenomenon where AI models generate responses or outputs that are not based on the input data or are not coherent with reality. This can occur when the model is not properly trained or when it is fed biased or incomplete data. It is important to monitor AI outputs and ensure that they are accurate and relevant to the intended purpose.

So when you generate something with text-generation software, you must monitor the output, because it may be skewed or even outright wrong. It may generate nine good points and still contain one completely unrealistic sentence that is hard to catch if you don't know the subject. So, to get clean results, ask about things you already know; don't generate something important for presentation on a topic you have no knowledge of.
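One lightweight way to "be there to monitor" is to flag the generated sentences most likely to contain checkable factual claims (numbers, dates, proper nouns) for manual review. This heuristic is purely my own illustration, not a reliable hallucination detector, but it shows how cheap it is to route risky sentences to a human:

```python
import re

def flag_for_review(text):
    """Split generated text into sentences and flag those containing
    digits or capitalized mid-sentence words (likely proper nouns)
    as candidates for manual fact-checking."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    flagged = []
    for s in sentences:
        has_number = bool(re.search(r"\d", s))
        # Capitalized words after the first word often name people or places.
        has_proper_noun = bool(re.search(r"(?<=\s)[A-Z][a-z]+", s))
        if has_number or has_proper_noun:
            flagged.append(s)
    return flagged

output = ("The model was released in 2018. It works well. "
          "It was built by OpenAI researchers.")
print(flag_for_review(output))
# Flags the first and third sentences (a year and a company name);
# "It works well." carries no checkable specifics and passes through.
```

The point is not that this filter is accurate; it is that the human, not the model, should own the final accuracy check.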

Hopefully, things will get easier with the next generation of models. Another concern is that AI-generated content, right or wrong, will appear on the internet in mass quantities. This is why regulators want to shape frameworks and policies to curb skewed or wrong information online. That is necessary not only to combat polluted data sources, but also for AI training, so that wrong data doesn't get reused for training.

We are already seeing realistic pictures, imagined by AI, of events that never happened. So you need to be careful when incorporating any random data into your research. This is not a special warning; it's common sense. More wrong data is generated deliberately by humans than by AI.


AI is here to stay and it will continue to evolve and improve as a tool that can help accelerate workflows and automate repetitive tasks. While there are concerns about AI-generated content, it is important to remember that AI is a tool and it requires human oversight to ensure accuracy and relevance. As long as creators and AI companies respect each other and work together, there is no reason to fear that AI will replace jobs. Rather, it can be a valuable addition to our existing tools and technologies. As with any technology, it is important to stay vigilant and monitor AI outputs to ensure that they are accurate and relevant to the intended purpose.

(There, I automated the boring part of this article, writing conclusions).


About the Author

Mahbub Hasan is a creative professional from Bangladesh. In Technofaq, Mahbub writes articles about design, privacy, technology and life surrounding them.
