
For a while, I have been wondering what topic I should select for my maiden article on ShiRo Insights, and then I thought, what better than covering something that we have been witnessing recently at my workplace? And here I am, sharing my thoughts and experiences on the effects of artificial intelligence (AI) on technical writers. I will also try to cover how technical writers using AI have affected recruiters and interviewers for various companies and businesses.
I have been involved in reviewing and interviewing technical writers for my organization (the place where I am currently employed, not ShiRo Insights) since early 2022. Our hiring process involved candidates writing articles on topics we assigned (with a certain word count requirement), followed by an interview with me, our editor, or both. Over this period, I came across a variety of candidates – good, not-so-good, excellent (one of these excellent candidates eventually turned out to be my girlfriend 😊) – which is not out of the ordinary. However, something out of the ordinary did happen in early 2023.
While the hiring process was going on for a batch of technical writers in the early months of 2023, I found that a few candidates, who were independently assigned the same topic, had submitted very similar sample articles. They were neither identical nor plagiarized, but they were strikingly alike in structure, information covered, tables provided, and various other aspects. An easy explanation would have been that the candidates referred to the same source, with a high likelihood of plagiarism, but that was not the case. I was confused! How could these candidates, who didn’t know each other, write articles that were so similar yet not plagiarized? For a moment, I thought that after going through numerous texts and content pieces over the months and years, I had finally gone crazy and started perceiving patterns or connections that logically shouldn’t exist.
Just to check that I was not going insane, I asked the editor whether he saw the same stark similarities in the articles of these unrelated candidates. Thankfully, I was not losing my marbles; those articles were similar. Now, the question arose: if they were so similar, why was no plagiarism detected in any of them? An easy explanation would have been that the candidates had used the same source and rephrased it – either themselves or by using a tool. However, I had doubts about this, because both articles were very well written, and anyone with the ability to rephrase such a technical article with quality and precision would be skilled enough to write it from scratch. Seasoned content writers and managers will know that editing or rephrasing an article authored by someone else is notably more challenging than writing the same piece from scratch.
Then it struck me. These articles must have been created using an AI tool, probably ChatGPT, given that in early 2023, ChatGPT was the hot topic and there were few alternatives. What must have happened was that the candidates provided very basic, near-identical prompts, and the AI tool produced very similar – yet not plagiarized – pieces of content. However, these were still speculations; very strong speculations, but speculations nonetheless. After a meeting with my editor, we decided to move these candidates forward to the next phase of the hiring process, a telephonic interview. We proceeded with them because we didn’t have any prejudice against AI tools; they are tools which, if used correctly, can improve the quality and efficiency of our work.
A few days later, interviews were scheduled and conducted by me and the editor. As the first interview started and we went into the details of the sample article, it became blatantly clear that the candidate had little knowledge (if any) of what was written in the article they had submitted. The same thing happened in the subsequent interview with the other candidate; neither had any idea of the contents of their articles. This pretty much confirmed my earlier speculation that they had used an AI tool to create their articles. They were rejected and we moved on to other candidates; no additional time was wasted.
However, in the following months, in all of our subsequent hiring drives, similar patterns kept appearing: the sample articles submitted by candidates were very well written, but in the interviews the candidates had little idea of what was written in them. It became abundantly clear that candidates had increasingly started relying on AI tools to create decent-quality articles without even understanding their contents. The sample articles were no longer helping us evaluate candidates and gauge their capabilities and skills. This created a real hassle for us, as it resulted in significant time wasted on interviews with little to no output. ChatGPT and other AI tools, which were created to assist people with their tasks in various domains, had become a pain point for us.
I want to clarify that I am not blaming AI tools for this nuisance; the fault lies in the misuse of such great tools. AI tools have opened up a gateway that allows individuals and businesses to achieve significantly higher productivity and efficiency. There is no denying that AI tools have revolutionized the way we work, much like the invention of online search engines did. Only time will tell the full extent of AI’s impact. However, just like any other tool, they function optimally when properly used, not when misused. And unfortunately, many people are becoming victims of their own misuse of AI tools.
From personal experience, AI tools (ChatGPT in particular) have helped me save a tremendous amount of time and effort as I create high-quality and accurate content. They have aided in grammar checking, content optimization, keyword research and general advice on the quality of content, to name a few. They have also helped me significantly in foraying into domains in which I don’t have prior expertise. I have relied on ChatGPT for not only work but various other aspects of my daily life, from planning trips to asking silly questions and much more than I care to share publicly.
The question for a lot of people using AI tools should be, “How do I use AI tools without misusing them?” From my professional point of view, over-reliance on AI tools can have a real-life impact on one’s skills, and this impact won’t be positive. It can lead to reduced creativity and negatively affect an individual’s capacity for independent analysis, critical thinking, and problem-solving. And let me share a little secret with people who believe they can outsmart recruiters and managers with cleverly prompted AI-generated content: as far as content writing is concerned, experienced managers can often tell when content is AI-generated, without relying on AI content detectors (sadge!).
At the time of writing this piece, it has been approximately one year since ChatGPT was launched – the first AI chatbot to reach a broad public audience – and in only one year it has made a massive impact across sectors. As we have dashed into the era of artificial intelligence, new AI chatbots – each claiming to be better than the last – are being launched or developed. Just as with the advent of the internet, we cannot escape the world of AI; we will have to learn to live with it. As we move forward, one of the most important tasks for people using AI tools will be figuring out how to use them, not misuse them.