No one tells you this, but Data Science is a game of compromise.
In school, analysis—no matter how complex—almost always had a clear path.
You just had to dig to find it.
But in real life, there’s too much randomness, too much ambiguity, and rarely enough time or resources to follow the “ideal” path.
I would argue this is the REAL hard part of doing analysis in the real world. It's not writing Python or SQL, or deciding which model is best, but rather:
The pre-planning
Adjusting to constraints without losing sight of the objective
Aligning with stakeholders in a clear way
And in the past, this is where I’d get stuck.
I’d open a document, try to structure my thoughts, maybe sketch out a plan… and end up jumping straight into the code just to feel like I was making progress.
But lately, I’ve been doing something different.
Something that’s helped me think sharper, stress less, and deliver more polished work—without adding extra overhead.
In fact, I’d say this new workflow has 10x’ed my work.
And today, I want to share exactly how it works.
Context is all you need
Most people using ChatGPT ask one-shot questions and then get frustrated when it doesn’t magically fix all of their problems.
The reason it doesn't work that well is that it lacks enough “context”.
Without it, ChatGPT becomes a guessing machine.
With it, it becomes a collaborator.
Here's what the most efficient way of doing this looks like:
This is the foundation of my workflow: give ChatGPT real context from the beginning, let it help generate drafts, ideas, or code, then take that output, refine it, and feed it back in to keep moving forward.
Let me show you how it works in practice.
📣 Quick announcement
Our next monthly live Q&A (for paid subscribers) is in two weeks!
What other perks do you get as a paid subscriber?
Full access to all public + premium posts.
A free copy of my Data Science Project Ideas PDF (2025 edition).
A free copy of “Data Science Interview Case Studies (2025 Edition)”.
A 25% off coupon for a 1:1 mentoring session.
Occasional live Q&As and exclusive workshops.
Building your own context loop (+ best practices)
Start with a dedicated “Project”
This is by far the most effective way to start your own context loop, at least for now, since ChatGPT's functionality and capabilities are constantly evolving.
Start a new project to organize all your chats and uploads:
This makes it easier to add background docs, stakeholder notes, links, etc., so ChatGPT has context it can reference at any time (even across chats).
💡 Lately, I’ve been using Google Meet’s “Take notes for me” feature every time I meet with stakeholders. It works great for auto-generating meeting transcripts, which I can then upload to ChatGPT for extra context.
Making it part of your “Analysis Workflow”
Use ChatGPT as a soundboard for scoping
Brainstorm metrics, angles, hypotheses
Ask follow-up questions, get quick feedback
Decide on analysis approach collaboratively
💡 I love to use the “speech to text” feature. It lets me speak more freely and truly feel like I’m having a brainstorming session with another data scientist.
Generate a structured analysis plan
Ask ChatGPT to write a structured analysis planning document.
Tweak slightly, then share with stakeholders for alignment
Saves hours of manual document writing
Remember to feed this planning document back into your project files.
Use it throughout the analysis
Get code help for tricky joins, plots, or validations
Ask for sanity checks or edge-case ideas
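To give you an idea, here’s the kind of sanity check I’d ask ChatGPT to draft for a tricky join. This is just a minimal sketch with made-up data and hypothetical column names, not code from a real project:

```python
import pandas as pd

# Hypothetical data: orders joined against a users dimension table
orders = pd.DataFrame({"user_id": [1, 2, 2, 3], "amount": [10.0, 5.0, 7.5, 3.0]})
users = pd.DataFrame({"user_id": [1, 2], "country": ["US", "DE"]})

# indicator=True adds a "_merge" column telling us where each row came from
merged = orders.merge(users, on="user_id", how="left", indicator=True)

# Sanity check 1: a left join on a unique key should not duplicate rows
assert len(merged) == len(orders), "join unexpectedly duplicated rows"

# Sanity check 2: surface orders whose user_id has no match
unmatched = merged[merged["_merge"] == "left_only"]
print(f"{len(unmatched)} order(s) have no matching user")  # → 1
```

Edge cases like the unmatched `user_id` above are exactly the things ChatGPT tends to flag when I ask it to stress-test my approach.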
Summarize the final analysis
Feed in key findings
Let ChatGPT write the first draft of the summary/report
Polish and share with others
Summary
By talking to ChatGPT—using it as a soundboard or advisor—I gain more clarity and catch potential edge cases I’d often miss, simply because I don’t have the time or energy to fully stress-test my ideas during the planning phase. This results in more robust ideas, in half the time.
It writes documentation for me. I no longer have to worry about it, and I don’t have an excuse to skip it. Documentation gets done faster, alignment with stakeholders becomes easier, and they trust my work more.
It helps me write boilerplate code (especially for things like plotting) that would otherwise take time to write from memory or look up repeatedly on Google or in the docs.
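The boilerplate I mean is nothing fancy. For example, a labeled bar chart like the sketch below (the numbers and labels are made up), which I’d otherwise rebuild from memory every time:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Made-up numbers, purely to illustrate the plotting boilerplate
categories = ["A", "B", "C"]
values = [42, 17, 29]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(categories, values)
ax.set_title("Signups by segment")
ax.set_xlabel("Segment")
ax.set_ylabel("Signups")
ax.bar_label(ax.containers[0])  # value labels on top of each bar
fig.tight_layout()
fig.savefig("signups.png", dpi=150)
```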
The result? A faster, clearer, and more reliable workflow.
Thank you for reading! I hope these tips help you 10x your work too.
See you next week!
- Andres
Before you go, please hit the like ❤️ button at the bottom of this email to help support me. It truly makes a difference!