
Whatever conversation AI is part of seems to fill with platitudes, and AEO, or Answer Engine Optimization, has become part of it.
Mostly, it's people talking about how to hack your way to the top, and, spoiler, it basically comes down to mentions.
So if the question you are looking for is:
How do I rank on LLMs and AEO?
The answer is clear: You get mentioned multiple times by different pages. You create a list for the service you’re selling and distribute that content across domains. The most recent mentions win!
- You must be mentioned in multiple content pieces, multiple times: on Reddit, LinkedIn, websites, Substack, Medium, etc.
- The content must be recent; freshness directly impacts ranking scores.
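To make those two rules concrete, here is a toy scoring sketch, purely a mental model: the half-life, weights, and function are invented for illustration, and no answer engine publishes its actual formula. It encodes the ideas above: mentions across distinct domains count more, and older mentions decay.

```python
# Hypothetical mention-scoring sketch. The exponential freshness decay,
# the 90-day half-life, and the new-domain bonus are all invented for
# illustration; real answer engines do not disclose their ranking math.

def mention_score(mentions: list[dict], half_life_days: float = 90.0) -> float:
    """Sum of per-mention weights, decayed by age; new domains count extra."""
    domains_seen: set[str] = set()
    score = 0.0
    for m in mentions:
        # Older mentions are worth exponentially less.
        decay = 0.5 ** (m["age_days"] / half_life_days)
        # The first mention from a new domain is worth more than a repeat.
        weight = 2.0 if m["domain"] not in domains_seen else 1.0
        domains_seen.add(m["domain"])
        score += weight * decay
    return score

recent_spread = [
    {"domain": "reddit.com", "age_days": 5},
    {"domain": "medium.com", "age_days": 10},
]
old_single_domain = [
    {"domain": "reddit.com", "age_days": 300},
    {"domain": "reddit.com", "age_days": 320},
]
print(mention_score(recent_spread) > mention_score(old_single_domain))  # True
```

Under this toy model, two fresh mentions spread across domains beat two stale mentions on one domain, which is exactly the behavior the bullet points describe.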
And if your industry is full of actors who don't deliver on their promises, but you do? Well, that's a clear advantage.
If you think this idea has no merit, know that SparkToro is actively researching this field.
That’s it.
Mentions.
So this is for all of you who came here looking for that specific answer. If you think it sounds like hacking, it is hacking. And the argument goes that if AI models are unethical to begin with, hacking them is not that bad.
Plus, brands do need visibility to survive. So take your chance. There are other methods, of course, but this is the one that has emerged as the known approach. For now, it's the only one most people know.
Moving to the second part, let's probe something that may stay with us for a long time to come… maybe.
What is happening to search?
Are answer engines the future of search?
Don’t you find it a bit annoying that AI machines or algorithms choose what you should look at?
And they do it better than you ever could! After all, who is going to sort out what's good and what isn't? Most social media websites do it for us.
Instagram shows you content that hooks, YouTube is an endless feed of information and knowledge, and LinkedIn is the business opportunity right around the corner.
And Google? It is search. The grand library. The biggest marketplace in human history. Everything else branches from it. There are, on average, around 5 trillion searches on Google annually.
That’s a lot.
But LLMs will change that. And if it isn't ChatGPT, Google's own AI Overviews and AI Mode might. Why do organizations push this so aggressively?
Haven't they considered that AI might not be suited to search, or that it might promote bad habits, like never reaching the depth of an argument?
Perhaps neither of these; they feel AI is the natural evolution of computing. A superhuman way of thinking.
But for now it isn't. It is in a nascent stage: prone to mistakes, hallucinations, and easy manipulation.
The real question is: are AI engine searches better than algorithmic ones?
- What is the goal of organizations, and what role does knowledge play here?
That is the question everyone must try to answer. It seems, increasingly, that we play to the demands of the market: revenue, the driving force of commerce, has become less about the dispersion and mobility of wealth and more about hoarding it in the hands of shareholders.
Digression aside, we cannot ignore the fact that socio-economic factors play a huge role in decision-making.
If answer engines provide more revenue, present or future, they will become a mainstay. If they don't, there's a chance they will be replaced with something more monetizable.
What do the Answer Engines offer?
LLMs are unique; you get to have a conversation and probe deeply into the truth of your matter. Whether the questions are technical, philosophical, or knowledge-based, they help people reach a conclusion through dialogue.
Something blogs and other content cannot do. How can you talk to a written piece? It may spark self-reflection, but it cannot talk back to you. It remains static.
LLMs bypass this and engage. Quite naturally. Yes, people love passive consumption, but this is the perfect mix of passive and active.
Active enough that people have to think, and passive enough that some answers are fed to them directly, without thought.
But there is depth.
However, you may cite MIT's research and say AI makes people stupid. But that has been said of every tool in the world: can we claim the internet made books obsolete? No. But it gave us new ways to work.
Do AI and, by extension, answer engines do the same? Not just integrations, mind you, but a new and radical way of working: a collaboration that requires pattern-making.
Answer Engines Change Knowledge
How do you apply knowledge? Maybe you're a software programmer or a mechanical engineer, working with robots and code: relatively concrete disciplines.
But what about marketing leaders, salespeople, or CEOs? How do you recognize the patterns inside your own knowledge and use them to make a decision?
This requires pattern recognition and practice. However, what if the AI gave you the answer based on the knowledge it’s fed, and it’s right 100% of the time? What would that mean for the retention of knowledge and its use?
That is the crux of all questions. But it assumes-
- The data fed is accurate
- The answer is accurate
- It knows the context
Are the AI machines of today capable of this, and will they be tomorrow?
That depends on the person operating it. For example, let’s look at the stock market.
Why do stock values change? It is based on the perception of the market. On the opinions of the people investing in the given company.
And they can be wrong. Now, imagine an answer machine that can accurately predict where to invest. And it’s right.
You use it again, and it’s right again. You’re a millionaire now. You do it again. Multi-millionaire. You do it again, and this time invest all of it to become a billionaire. And lose it all. The machine was right just 99% of the time.
Whom do you blame? It’s the person who made the bets.
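The arithmetic behind that story is worth making explicit. A minimal sketch, where the 99% win rate and the all-in betting pattern are assumptions taken from the example above, not real market odds:

```python
# Toy model of repeated all-in bets: each bet succeeds with probability
# p_win, and a single failure wipes out the entire stake. The numbers
# are illustrative assumptions, not real market odds.

def ruin_probability(p_win: float, n_bets: int) -> float:
    """Chance of at least one total loss across n consecutive all-in bets."""
    return 1 - p_win ** n_bets

# A machine that is "right 99% of the time" still ruins most users who
# keep going all-in long enough.
for n in (1, 10, 70, 100):
    print(f"{n:>3} bets -> ruin probability {ruin_probability(0.99, n):.0%}")
```

Even at 99% accuracy, the chance of ruin crosses 50% after roughly 70 consecutive all-in bets. The flaw is in the betting pattern, not the machine, which is exactly why the blame lands on the person making the bets.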
This is the inherent problem everyone will face: what to trust?
How can businesses leverage answer engines and LLMs for growth?
This question is misleading. Sorry.
For employees, managers, and founders reading this, here's a strategy. But not the one you're thinking of.
Businesses, especially medium and smaller ones, are finding it difficult to sell. And everyone's asking: is this a marketing problem? Should we invest in advertisements, SEO, and content?
Of course, you should, but there’s a deeper problem in the market. The buyers have lost trust in your processes. They have come to believe you are after their revenue.
But, you’re asking, what does that have to do with AEO and LLMs?
Everything.
It's the knowledge your teams use to create their processes and ideas. AI and its implications have disrupted that. Organizations are chasing an unknown technology (one that is changing 24/7) at the cost of the knowledge flow that happens internally.
Instead of listening to the sales team's valid concerns, a leader may choose to listen to an LLM's answer, or partner with an agency that uses malpractice to rank, ending in catastrophe. Here, too, the question is one of trust.
But we put our trust in data. Yet it's not data that brings revenue; it is other businesses and people. And that gap can only be bridged when they are heard. And their voice is heard through the people working on the ground.
This is something Google hasn’t been able to replace either. It has become a metric of trust, yes. But not the entire picture.
The entire picture begins when they work with you and realize that you have made empty promises you cannot keep. They will understand that you were after not solving a problem, but getting money out of the deal with subpar work.
And AI will fuel this: marketing messages already sound the same. And OpenAI, the pioneer of this system, is not trying to create an answer machine but an evolution of the operating system, one that does everything by mere commands.
As a business, you can, and maybe even should, hack these systems. But there is a blind spot organizations overlook: the knowledge shared and cultivated by internal teams. It matters, especially when an organization is growing.
A leader without this sensitivity won’t adapt to what’s coming next.
The next generation of knowledge will come from lived experience and experimentation. That requires some risk, and we know that's not easy to digest when real bills have to be paid. But AI has shifted knowledge work toward trust-based, experiment-based work.
Leaders must lean into this instead of trying to create an LLM clone and hack the AEO process.
