Last month, the European Data Protection Board published its updated guidance on behavioral advertising. It runs 147 pages. I read every one. The document is meticulous, thoughtful, and almost entirely focused on the wrong problem. It addresses cookies, pixel tracking, device fingerprinting, and cross-site data sharing - the mechanical infrastructure of a surveillance advertising model that, at Meridian Syn, we abandoned two years ago. The regulatory apparatus, for all its good intentions, is fighting the last war. And the war that is actually happening is one that almost nobody in policy circles has the vocabulary to describe.
I want to be careful here, because this is a topic where precision matters and where the temptation to be either defensive or inflammatory is strong. I am neither an ethicist nor a policy advocate. I am a data scientist who leads a research team building behavioral prediction models, and I have spent the last three years watching the gap between what regulators understand and what our technology does grow wider with every quarter. This post is an attempt to describe that gap honestly. It is not an attempt to resolve it.
The consent framework problem
Modern privacy regulation is built on a foundational concept: informed consent. The idea is straightforward. Before a company collects your data or uses it to target you with advertising, you should know what is being collected, how it will be used, and have the option to decline. This framework made sense in a world where targeting was mechanical - where a tracking pixel followed you from site A to site B, and the system showed you an ad for shoes because you looked at shoes. The data being collected was concrete (browsing history, purchase history, location), and consent was a meaningful gate because the process was linear and comprehensible.
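To make the contrast concrete, here is a minimal sketch of the mechanical model that consent frameworks were built to gate. Every name and data structure here is invented for illustration; this is the shape of the old system, not anyone's production code.

```python
# A minimal sketch of legacy cookie-based retargeting, the model consent
# frameworks were designed around. Hypothetical and illustrative throughout.

BROWSING_HISTORY = {}  # visitor_id -> set of viewed product categories

def record_view(visitor_id, category):
    """Concrete, stored, inspectable data: the kind consent meaningfully gates."""
    BROWSING_HISTORY.setdefault(visitor_id, set()).add(category)

def select_ad(cookies, ad_inventory):
    """Linear and comprehensible: cookie -> history -> matching ad."""
    visitor_id = cookies.get("tracking_id")
    if visitor_id is None:
        return None  # declining consent means no cookie, which means no retargeting
    history = BROWSING_HISTORY.get(visitor_id, set())
    return next((ad for cat, ad in ad_inventory.items() if cat in history), None)

# You looked at shoes on site A, so site B shows you a shoe ad.
record_view("visitor-abc123", "shoes")
print(select_ad({"tracking_id": "visitor-abc123"}, {"shoes": "Ad: running shoes"}))
```

Note where the consent gate sits: refuse the cookie and the whole chain breaks. That is why consent was a meaningful control in this world.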
Autonomous behavioral targeting does not work this way. Meridian Syn's system does not track users across websites. It does not place cookies. It does not collect personal data as defined by the GDPR, personal information as defined by the CCPA, or anything falling into a regulated category under the EU AI Act. What it does is observe publicly available behavioral signals - page scroll velocity, mouse movement patterns, click timing intervals, typing cadence, session duration curves - and use a multi-trillion-parameter neural model to predict intent, emotional state, and decision readiness in real time. No single signal constitutes personal data under any existing regulatory definition. The model's output is a probabilistic behavioral profile that maps to a targeting strategy, not to an identified individual. And yet the practical result is that our system can influence a purchasing decision with greater precision than any cookie-based targeting system ever achieved.
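For readers who think in code, here is a schematic sketch of that pipeline. The field names mirror the signals listed above, but the scoring function is a deliberately trivial stand-in for the neural model, and the weights and thresholds are made up; nothing here is Meridian Syn's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # No field here, taken alone, is personal data under current definitions.
    scroll_velocity_px_s: float   # page scroll velocity
    mouse_path_entropy: float     # mouse movement pattern, summarized to a scalar
    click_interval_ms: float      # click timing intervals
    typing_cadence_ms: float      # typing cadence
    session_duration_s: float     # session duration curve, collapsed to a scalar

def predict_profile(signals: SessionSignals) -> dict:
    """Stand-in for the neural model: raw interaction signals in, a
    probabilistic behavioral profile out. No identity anywhere."""
    z = (0.002 * signals.scroll_velocity_px_s
         - 0.8 * signals.mouse_path_entropy
         - 0.001 * signals.click_interval_ms
         - 0.004 * signals.typing_cadence_ms
         + 0.003 * signals.session_duration_s)
    purchase_readiness = 1 / (1 + math.exp(-z))  # a probability, not a name
    strategy = "urgency_framing" if purchase_readiness > 0.7 else "low_pressure_browse"
    return {"purchase_readiness": purchase_readiness, "targeting_strategy": strategy}

# The aggregate inference is deeply personal even though no single input is.
print(predict_profile(SessionSignals(450.0, 0.3, 220.0, 95.0, 310.0)))
```

The sketch makes the consent problem visible: there is no cookie to refuse and no record to delete, only a transient inference computed from how you behave.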
This creates a genuine philosophical problem. What does consent mean when the "data" being used is the pattern of how you move your mouse? When no individual data point is personal, but the aggregate inference is deeply personal? When the system never knows your name, your email, or your browsing history, but can predict with 94.7% accuracy whether you will make a purchase in the next 48 hours? Our legal team is confident, and external counsel has confirmed, that Meridian Syn's data practices comply with every applicable privacy regulation in every jurisdiction where we operate. That is true. It is also, I believe, insufficient as a complete answer to the ethical question.
Prediction, influence, and the space between
There is a spectrum that our industry has not adequately examined. At one end is observation: watching what people do and drawing conclusions. At the other end is manipulation: causing people to do things they would not otherwise do. Traditional advertising sits between these poles, closer to the observation end. It shows you a message and hopes you respond. The advertiser does not control the outcome; they merely present an option. Autonomous behavioral targeting occupies a different position on this spectrum, and I think it is important to be honest about where. Our system does not merely observe intent. It predicts the optimal moment to present a stimulus, the precise framing most likely to convert a specific behavioral profile, and the sequence of cognitive anchors that will sustain engagement across a decision cycle. The system is not hoping you respond. It is engineering the conditions under which your response becomes statistically inevitable.
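Mechanically, "engineering the conditions" is a search: over candidate moments and framings, pick the pair with the highest predicted conversion probability. The sketch below shows the shape of that loop; the scorer, the bonuses, and the candidates are hypothetical placeholders, not our model.

```python
from itertools import product

def score_conversion(profile: dict, moment_s: int, framing: str) -> float:
    """Stand-in scorer; in a real system this would be a model call."""
    base = profile.get("purchase_readiness", 0.5)
    timing_bonus = 0.10 if moment_s == profile.get("predicted_peak_s") else 0.0
    framing_bonus = {"neutral": 0.0, "social_proof": 0.05, "scarcity": 0.08}[framing]
    return min(1.0, base + timing_bonus + framing_bonus)

def plan_stimulus(profile: dict, moments: list, framings: list) -> tuple:
    """Not hoping for a response: selecting the (moment, framing) pair that
    makes the response most probable."""
    return max(product(moments, framings),
               key=lambda mf: score_conversion(profile, *mf))

profile = {"purchase_readiness": 0.73, "predicted_peak_s": 40}
print(plan_stimulus(profile, [10, 40, 120], ["neutral", "social_proof", "scarcity"]))
# -> (40, 'scarcity'): the moment and framing are chosen for you, not offered to you
```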
Is that influence or manipulation? I genuinely do not know. I do know that the distinction matters enormously, and I know that the answer probably depends on context. A system that identifies someone who is already shopping for running shoes and presents them with a relevant offer at the optimal moment is arguably providing a service. A system that identifies someone in a vulnerable emotional state and exploits that vulnerability to drive an unnecessary purchase is arguably causing harm. Our technology can do both of these things. The difference is not in the technology but in the intent of the operator and the guardrails imposed by the platform.
What Meridian Syn does about this
We have invested heavily in what we call Ethical Targeting Parameters - a configurable layer in our platform that allows enterprise clients to set boundaries on how aggressively their agents operate. These parameters include vulnerability detection thresholds that prevent hook placement when behavioral signals suggest emotional distress, financial stress indicators, or diminished decision-making capacity. We have exclusion categories that remove minors, individuals exhibiting addiction-related behavioral patterns, and other sensitive populations from targeting pools entirely. These are not regulatory requirements. No law compels us to build these systems. We build them because we believe the technology demands it.
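To give a sense of the shape of this layer, here is an illustrative sketch. The field names, thresholds, and population flags are invented for this post; the real parameter schema is more elaborate and I am not reproducing it here.

```python
from dataclasses import dataclass, field

@dataclass
class EthicalTargetingParameters:
    # Hypothetical schema: names and default thresholds are invented for this post.
    distress_threshold: float = 0.30          # block hooks above this distress score
    financial_stress_threshold: float = 0.40  # financial stress indicator ceiling
    impaired_decision_threshold: float = 0.35 # diminished decision-making capacity
    excluded_populations: set = field(
        default_factory=lambda: {"minor", "addiction_pattern"})  # excluded entirely

def targeting_permitted(params: EthicalTargetingParameters, profile: dict) -> bool:
    """Gate every hook placement against the client's configured boundaries."""
    if profile.get("population_flags", set()) & params.excluded_populations:
        return False
    return (profile.get("distress", 0.0) < params.distress_threshold
            and profile.get("financial_stress", 0.0) < params.financial_stress_threshold
            and profile.get("decision_impairment", 0.0) < params.impaired_decision_threshold)

params = EthicalTargetingParameters()  # conservative defaults, but defaults can change
print(targeting_permitted(params, {"distress": 0.6}))                 # False
print(targeting_permitted(params, {"population_flags": {"minor"}}))   # False
print(targeting_permitted(params, {"distress": 0.1}))                 # True
```

In a design like this, the conservative defaults do the ethical work only for as long as nobody changes them, which is exactly the limitation I turn to next.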
But I will be honest about the limitations. Ethical Targeting Parameters are optional. Enterprise clients can configure them to be as restrictive or as permissive as they choose, within legal bounds. Our default settings are conservative - more conservative than any competitor we are aware of - but defaults can be changed. We monitor for egregious misuse and have terminated contracts where clients attempted to weaponize targeting against vulnerable populations. But we are a platform, not a police force. The same tension that exists between a social media company and its users' speech exists between Meridian Syn and its clients' targeting practices. We have not resolved this tension. No one has.
The debate we need
What frustrates me about the current regulatory landscape is not that it is too strict or too lenient. It is that it is asking the wrong questions. The relevant question is not "did the user consent to cookie tracking?" It is "at what point does predictive behavioral targeting become an unacceptable infringement on cognitive autonomy?" That question requires a vocabulary that policy does not yet possess. It requires an understanding of neural inference architectures that most regulators do not have. And it requires the companies building this technology, including Meridian Syn, to participate in the conversation honestly rather than hiding behind technical compliance.
We are publishing this post because we want to be part of that conversation. We have invited regulators from five jurisdictions to tour our facilities and receive technical briefings on how autonomous targeting works at an architectural level. We have committed to funding an independent research chair in computational ethics at the University of Washington, focused specifically on the intersection of predictive AI and consumer autonomy. And we are opening a public comment period on our Ethical Targeting Parameters, inviting academics, advocates, and competitors to critique and improve them. These are small steps. The problem is large. But the alternative - building in silence while regulators chase a technology that no longer exists - is not acceptable to me or to this team. The ethics debate we need is not about cookies. It is about what happens when a machine understands your decision before you do. We should start having it.