Using AI tools like ChatGPT for content writing: what science marketers should know

AI can write fast. But can it write right? When the science has to hold up, fluency isn’t enough. Here’s what happens when you put tools like ChatGPT to the test in biotech content – and what only a human expert can catch.
AI tools are everywhere. From ChatGPT and Claude to Perplexity and Gemini, generative AI has become a go-to for drafting, summarizing, and generating ideas. But not all outputs are created equal, and not all workflows can afford to get things wrong. For science marketers, the pressure is different: in biotech marketing, using AI tools carelessly can do real harm.
If you’re curious about AI but cautious about its limitations – especially when accuracy and credibility are non-negotiable – this post is for you. I’ll walk through how I’ve integrated tools like ChatGPT and Perplexity into my freelance science writing workflow and share where they’ve saved time, where they’ve stumbled, and how I use my scientific background to make sure my final content is solid.
I’ll also give you a glimpse of a project where I’ve been testing ChatGPT for content writing. It’s already shifted how I think about using AI in real-world content workflows.
What AI writing tools get right and a few ways I use them
After nearly 15 years supporting clients across biotech, pharma, life sciences, and scientific publishing, I understand how essential it is to get the science right and to earn the trust of your audience. Your content doesn’t just need to sound good – it has to be factually correct, appropriate for your intended audience, and clear enough for a global scientific readership.
When I’m writing about a new topic or facing a tight timeline (or both), AI tools can really help me get moving. I often start with Perplexity to research the topic and get a quick overview without having to dive into the literature right away. From there, I might use ChatGPT to turn my notes – or if I’m being honest, a long and rambling voice dictation – into a rough outline and then a first-pass draft.
Here are a few ways I use AI tools in my workflow:
- Topic research: Getting a quick overview of a scientific area or product before diving into primary literature – an essential step AI doesn’t replace.
- Drafting outlines: Turning loose notes or voice dictations into structured outlines or starting drafts (a scripted version of this step is sketched below).
- Brainstorming angles: Exploring different ways to position a product or message for a specific audience.
- Clarifying messaging: Using ChatGPT to refine awkward sentences or explore tone variations, especially when writing for audiences outside my primary field.
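For anyone who’d rather script the notes-to-outline step than paste into a chat window, here’s roughly what it can look like. This is a minimal sketch, assuming the official OpenAI Python SDK and an API key in your environment – the model name, prompt, and notes are illustrative, not a recipe:

```python
# Minimal sketch: turning rough dictated notes into a first-pass outline.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

raw_notes = """
Dictated notes: new qPCR master mix launch, audience = core facility managers.
Key points: lot-to-lot consistency, faster cycling times, compatible with
existing thermocyclers. Tone: practical, no hype.
"""  # illustrative placeholder – use your own notes

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a science marketing writer. Turn the notes into a "
                "structured blog outline with headings and bullet points. "
                "Do not invent facts, data, products, or references."
            ),
        },
        {"role": "user", "content": raw_notes},
    ],
)

print(response.choices[0].message.content)  # a starting point, never the final draft
```

Whether you use the chat interface or a script, the output is the same kind of raw material: a first pass that still needs a scientist’s eye.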
Of course, they don’t get everything right, but they get me far enough that I can see what’s missing and start shaping the message with clarity.
These tools are especially helpful when the project briefing is thin, the product is unfamiliar, or I’m addressing a different market sector. That early pass helps me get oriented quickly and work more efficiently, without lowering the bar for scientific accuracy or clear communication.
They don’t replace my scientific lens, but they do help me get faster to the part of the project where that lens is needed.
Why AI-generated content can put biotech brands at risk
As helpful as these tools can be, they come with serious limitations, especially in scientific and technical fields, where accuracy isn’t optional.
One of the biggest issues is what’s called “hallucination.” AI models like ChatGPT and Perplexity can produce text that’s grammatically polished and stylistically impressive, but factually incorrect. Over and over, I have seen them cite papers that simply don’t exist and misrepresent the findings in those that do. They’ll describe workflows that sound plausible but don’t match reality, and assign a product features it simply doesn’t have.
My first experience with AI hallucinations
I was writing about a signaling pathway dysfunction, and the associated disease was a bit gross. So I asked ChatGPT for a more compelling disease example linked to the pathway. It delivered exactly what I requested, complete with a detailed citation that looked legitimate – real researchers who work in that field, a reputable journal, even a DOI. But the paper simply didn’t exist. The entire citation was fabricated, and the supposed connection between the disease and the pathway was pure fiction.
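One silver lining: a fully fabricated DOI is cheap to catch. Before trusting any AI-supplied citation, you can resolve its DOI against the public Crossref API – this won’t confirm that the paper actually supports the claim (that still takes reading it), but it instantly flags references that don’t exist. A minimal sketch in Python, using only the requests library (the example DOI is real; everything else is illustrative):

```python
# Quick first-pass citation check: does this DOI exist at all?
# Queries the public Crossref REST API (no API key required).
# Note: Crossref covers most journal DOIs; some (e.g., DataCite) live
# in other registries, so a 404 here is a flag, not final proof.
import requests

def check_doi(doi: str) -> None:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        print(f"{doi}: not found in Crossref – possibly fabricated")
        return
    resp.raise_for_status()
    work = resp.json()["message"]
    title = work.get("title") or ["(no title on record)"]
    journal = work.get("container-title") or ["(no journal on record)"]
    print(f"{doi}: '{title[0]}' in {journal[0]}")

# A real DOI ("Array programming with NumPy", Nature, 2020):
check_doi("10.1038/s41586-020-2649-2")
```

A passing check is only step one – the paper still has to say what the content claims it says, and that’s where the human reading comes in.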
Because the writing sounds fluent and authoritative, it’s easy to forget what these tools really are, and to mistake that polish for true understanding when it’s just an illusion. Remember: what AI tools are actually doing is assembling language based on patterns. They are not verifying facts or grasping meaning the way a human expert would.
That’s where the danger lies: not in glaring errors, but in content that sounds credible and can easily slip past someone without the scientific background – and the hours – to rigorously fact-check every claim.
These aren’t theoretical concerns. I’ve worked on projects where the AI-generated version looked impeccable on the surface, but if I hadn’t double-checked every detail, I would have delivered flawed content to a client who counts on me for work that’s both correct and credible.
And that’s the core problem: AI can write with confidence, but it doesn’t know what it’s saying. That’s still our job.
Can AI replace content writers? Not in scientific marketing
Why context, nuance, and human judgment still matter
AI can mimic language, but it can’t replace scientific insight. That’s why human expertise is still essential, especially in research content, where precision and trust are non-negotiable.
For example, AI tools don’t know which workflow steps actually make or break a laboratory protocol. They can’t weigh which research findings matter most to a product manager trying to define a product’s USP. They can mimic tone, but they don’t know which tone fits which audience, or why it matters. They generate content based on patterns, not on real-world experience, regulatory understanding, or audience needs.
Scientific communication often hinges on nuance: what you say, what you leave out, and how you position information for different stakeholders. That’s not something a model can intuit – it’s something a human learns through experience.
That includes ethical framing – knowing how to position data responsibly, avoid overpromising, and communicate clearly in regulated spaces. It also includes the ability to bring in real, properly contextualized scientific references to support a message without distorting it. Credibility doesn’t just come from saying the right things, but from showing your audience that you understand what matters to them.
What a science writer can do that AI can’t
When I review content – whether it’s been drafted by a person or a tool – I can spot what’s off almost immediately. I catch what’s missing, misaligned, or just slightly wrong in ways a generalist wouldn’t notice. This is the benefit of combining a research background with years of writing for scientific audiences.
My process includes real fact-checking: checking sources, tracing claims to the original literature, and making sure every reference says what the content implies it does. It also includes reframing content for a global audience, clarifying complex information without losing scientific rigor, and adjusting tone to resonate with end users.
I know when to flag an overreach, when to push back on a vague claim, and when a product benefit needs clearer framing for skeptical readers. I don’t just check the science – I help shape it into content that earns credibility. That’s not about sounding smart. It’s about helping my clients get it right and stay trustworthy.
What makes the difference is bringing context and judgment to every project:
- Tracing claims back to the original literature, making sure every reference says what the content implies it does
- Clarifying complexity for a global, often non-native English audience, without losing scientific rigor
- Adjusting tone and emphasis to suit the format – white paper, web copy, or application note
- Spotting gaps, overreaches, or subtle inaccuracies that AI and generalist writers might miss
- Positioning data responsibly to avoid hype, ensure regulatory compliance, and earn audience trust
Testing ChatGPT for content writing in a real-world project
I commit to the same high standards in my own work, including a personal project where I’m exploring what AI can and can’t do in real-world content creation. I’m launching a website on a topic I’m passionate about and using it as a testing ground for AI-assisted blog writing. By building and refining custom GPTs, I’ve been able to generate content much more efficiently.
They can produce strong-sounding content in my writing style, fast – but sounding strong isn’t the same as getting it right. In my topic area, precision is non-negotiable, and every message needs to carry the right tone, intent, and scientific rigor. Even with detailed prompts, curated background material, and a strong knowledge base, it still takes hands-on time, oversight, and real judgment to make sure the content meets my standards. Quality doesn’t just appear – it has to be shaped.
Why the future of science content is a partnership with AI
AI writing tools aren’t going away, and when used well, they let us spend less time getting started – leaving more time to refine what matters and to work where our expertise has the greatest impact.
When it comes to AI writing vs. human writing, the difference often lies in context, clarity, and trust. The strongest content today doesn’t come from AI or from humans alone. It comes from collaboration between the two. Tools like ChatGPT and Perplexity can speed up the early stages, but it still takes human expertise to make sure the science holds up, the message resonates, and the final product builds trust.
That’s where the opportunity lies: not in replacing human insight, but in reinforcing it. For science marketers, that means pairing the speed of AI with the standards that the industry demands – because in science communication, close enough just isn’t good enough.
Curious how this looks in practice? Wondering if AI tools could fit into your content workflow or where human expertise still makes all the difference?
Contact me and let’s talk about what works, what doesn’t, and what will set your content apart.
AI-generated image (DALL·E 3 via ChatGPT), modified by the author.