The Doubao AI: Will the Mobile Ecosystem Be Rewritten When AI Agents Are Really Implemented?
Foreword: in just a few days, this storm laid the problems out in front of us.
In early December, an AI mobile assistant called “Doubao” suddenly brought a scenario that had long existed only in research and imagination, an agent operating the phone behind the scenes, onto real devices, and it immediately clashed with the security and risk-control systems of WeChat, banks, and Alibaba’s apps. In just a few days, it was as if a peephole onto the future had been opened.
Below, we’ll first outline the timeline as simply as possible, then explain why this trend cannot be easily stopped; finally, we’ll focus on a more fundamental question: if AI agents are to truly live and grow on mobile phones over the long term, how must identity, authorization, and governance change? DID (Decentralized Identifier) may be the key answer.
1. What Happened Over the Past Few Days
December 1st: Doubao Assistant appeared on a Nubia engineering prototype, positioned as a technology preview rather than a mass-market product for ordinary consumers. The official statement emphasized that the relevant operational permissions are granted only with user authorization.
Demonstration videos showed it could perform tasks in the background, such as ordering takeout, booking flights, comparing prices, and even replying to WeChat messages. In other words, operations that previously required manual intervention can now be automated across apps by the AI agent.
December 3rd: Several users discovered that using Doubao Assistant to operate WeChat resulted in a forced logout prompt, stating that the login environment was abnormal and requiring a different device to log in again. Doubao responded that the WeChat operation function had been removed, and related accounts would be gradually unblocked; WeChat stated that it may have triggered its existing security risk control mechanism.
December 4th-5th: Reports from multiple media outlets and user feedback indicated that risk control pop-ups targeting AI or screen sharing also appeared in financial apps such as Agricultural Bank of China and China Construction Bank, requiring users to disable the AI assistant before use; this was considered the first large-scale conflict between the AI agent and the platform.
December 6th: In tests, multiple Alibaba-affiliated apps, including Taobao, Xianyu, and Damai, began refusing logins on phones running the Doubao Assistant; even opening an app manually could trigger security mechanisms. Games such as Honor of Kings also added AI-control detection to block automated play. Doubao officially announced restrictions on AI operation in certain scenarios, including score/incentive farming, financial apps, and some games, and emphasized the need for clearer rules.
This tells us that once technology touches the security and commercial boundaries of a platform, blocking and rule adjustments will quickly follow—and the bigger question isn’t how long this blocking will last, but why it happens and how it will evolve.
2. Is the integration of AI into mobile operating systems truly an unstoppable trend?
Why is this trend so difficult to truly stop?
2.1 Strong demand for efficiency and user experience
The core value of AI agents lies in automation and time saved. In the past, we manually opened apps, checked prices, placed orders, replied to messages, tracked expenses… Each of these actions seems small, but together they eat into every day. Doubao’s demonstrations showed these tasks running automatically in the background, freeing users from having to stare at the screen.
This demand isn’t an isolated idea from a single team, but a long-standing user pain point. As long as the technology is feasible, someone will try to make it smoother and more integrated into daily life, rather than forever remaining in the demonstration or research stage.
Multiple manufacturers and technology companies have already begun experimenting.
Reports reveal that not only Doubao, but other mobile phone manufacturers are also trying to add functions such as memory, automatic summarization, or automatic operation to AI assistants, indicating that this is the direction of the entire industry. Even if some functions encounter limitations, they haven’t stopped trying.
This shows that the industry has realized that deep integration of AI into the mobile operating system is a crucial lever for product competitiveness. Blocking it may work in the short term, but it is unlikely to stop competitors or other ecosystem participants from continuing to advance.

User habits and market forces will push the technology toward maturity. Once users experience the convenience of automation, they will expect more complete and more secure versions. Platform blocking can only force technology developers to find new paths: negotiating with platforms, establishing clear authorization mechanisms, and making more transparent compliance adjustments. Blocking is like damming a stream while market demand is the water; however you try to stop it, it will find new cracks and keep flowing toward its destination.
2.2 Platform risk control and blocking are only the initial counterattacks.
The restrictions imposed by WeChat, banks, and Alibaba’s apps reflect their insistence on security, risk control, and control over commercial entry points; AI automation touches the platforms’ monetization logic and their assumptions about user behavior.
However, even from today’s perspective, these measures are merely a short-term counterattack. In fact, Doubao has quickly adjusted its scenario restrictions and regulations, aiming to balance technological development with industry acceptance and avoid excluding legitimate user access.
This indicates that the core issue is not whether the technology exists, but the relationship between technology and rules, between enablement and oversight. Blocking can temporarily ease the conflict, but it cannot undo the fundamental change the technology brings: intelligent agents operate at the operating-system level, unlike the traditional ecosystem model of isolated apps.
2.3 From the perspective of operating system security, it is necessary to reshape the boundaries of governance.
The common capabilities that AI agents rely on—such as simulating touch, reading screen information, and executing tasks across apps—are inherently highly sensitive permissions that multiple platforms have long guarded against. The Doubao incident combined these sensitive permissions with AI agents, directly exposing the platform’s risk control system to reality.
This means we need to rethink:
Who has the right to decide when and within what scope these operations occur?
How can we clearly design the boundaries, responsibilities, auditing, and reversibility of these operations while ensuring user experience and efficiency?
Stopping at blocking or simple permission control is like painting on the surface of water: it cannot change the fact that the water will eventually find a way out. We need to rebuild the underlying governance structure: identity, authorization, auditing, and rules.
3. DID: The Core Lever for Defusing Future Conflicts
Amidst this trend and conflict, decentralized identity (DID) is not a panacea, but it has the potential to become a key infrastructure supporting future systems, AI, and platform collaboration. The following three points are the most compelling reasons why it deserves our attention.
3.1 More precise identity control and least privilege
Traditional account systems often involve one-time, long-term authorization: the same account possesses almost identical privileges across multiple platforms, making it difficult for users to exercise fine-grained control over permissions. With AI agents taking over, excessive permissions could pose significant risks, and platforms also worry about unclear user authorization interfaces.
DID offers a segmentable, restrictive, and revocable authorization model: based on the scenario, users can issue short-term, narrowly scoped credentials to AI agents or services.
When risks increase, rules change, or the user no longer needs the function, the credentials can be immediately revoked without affecting other normal account usage.
In the Doubao incident, the platforms worried that AI-assisted actions would displace genuine user interaction and undermine security and fairness mechanisms. With a DID-based minimum-authorization mechanism, a platform could see that a given operation genuinely came from user authorization, stayed within a specific scope, and was auditable, reducing both misjudgments and the need for blanket blocking.
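The least-privilege model above can be sketched in a few lines. This is a hypothetical illustration, not a real DID library: the `issue`/`revoke`/`allows` names, the HMAC signing with a single user-held key, and the in-memory revocation set all stand in for what a production system would do with public-key credentials and a shared revocation registry.

```python
import hashlib
import hmac
import json
import time

USER_SECRET = b"user-held-signing-key"  # stand-in for the user's DID key
REVOKED = set()  # a real system would use a revocation registry, not a set

def issue(agent_id: str, scope: list, ttl_s: int) -> dict:
    """Issue a short-lived credential limited to an explicit scope."""
    cred = {
        "agent": agent_id,
        "scope": scope,              # e.g. ["order_food"], never "*"
        "exp": time.time() + ttl_s,  # short expiry enforces least privilege
    }
    payload = json.dumps(cred, sort_keys=True).encode()
    cred["sig"] = hmac.new(USER_SECRET, payload, hashlib.sha256).hexdigest()
    return cred

def revoke(cred: dict) -> None:
    """Revoking one credential leaves the user's other sessions untouched."""
    REVOKED.add(cred["sig"])

def allows(cred: dict, action: str) -> bool:
    """Platform-side check: signature, expiry, revocation, then scope."""
    body = {k: v for k, v in cred.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(USER_SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(cred["sig"], expected)
            and time.time() < cred["exp"]
            and cred["sig"] not in REVOKED
            and action in cred["scope"])

cred = issue("doubao-agent", ["order_food"], ttl_s=600)
print(allows(cred, "order_food"))      # True: in scope, not expired
print(allows(cred, "transfer_money"))  # False: outside the granted scope
revoke(cred)
print(allows(cred, "order_food"))      # False: revoked
```

The design point is that revocation and expiry are properties of the credential, not of the account, which is exactly what lets a platform reject one misbehaving agent without logging the user out everywhere.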
3.2 Feasible Pathways for Cross-Platform Trust and Auditing
AI agents don’t operate within a single app; they may execute tasks across multiple platforms. This makes traditional identity and authorization systems cumbersome and opaque:
Users must repeatedly verify and submit information on different platforms;
Platforms struggle to determine if an action exceeds the authorized scope and to trace the boundaries of responsibility;
In case of anomalies, unilateral risk control or blocking by the platform is the only solution, rather than collaborative discovery of the root cause.
DID offers a new solution:
Different platforms don’t need to fully trust each other or expose all account information; they only need to verify the validity and authorization scope of the user-provided credentials.
Users can clearly see when and for which task they authorized; they can also check if authorization has expired or been revoked.
Platforms can also more easily audit the source of external permissions, thereby improving risk control strategies or proposing more reasonable rules, rather than simply blocking.
This can greatly alleviate trust friction between platforms and between users and platforms in the future AI operating system ecosystem.
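A minimal sketch of that verification-and-audit flow, under stated assumptions: the registry dict, the credential fields, and the platform names are all invented for illustration, and a real deployment would verify public-key signatures (e.g. Ed25519) against a resolved DID document instead of trusting a lookup.

```python
import time

# Stand-in for a DID registry: platforms resolve an identifier's status
# here without ever seeing the user's account data on other platforms.
REGISTRY = {"did:example:alice": {"active": True}}

# A user-inspectable record of every cross-platform authorization check,
# so both sides can trace a decision instead of guessing at root causes.
AUDIT_LOG = []

def verify(cred: dict, platform: str, action: str) -> bool:
    """Each platform checks only validity and scope, then logs the decision."""
    holder = REGISTRY.get(cred["iss"])
    granted = (holder is not None and holder["active"]
               and time.time() < cred["exp"]
               and action in cred["scope"])
    AUDIT_LOG.append({"platform": platform, "action": action,
                      "time": time.time(), "granted": granted})
    return granted

cred = {"iss": "did:example:alice",
        "scope": ["compare_prices"],
        "exp": time.time() + 300}
print(verify(cred, "Taobao", "compare_prices"))  # True: valid and in scope
print(verify(cred, "Taobao", "place_order"))     # False: beyond the scope
print(len(AUDIT_LOG))                            # 2: both attempts auditable
```

Note that the denied attempt is logged too: an auditable refusal is what lets platforms refine risk-control rules rather than fall back on blanket blocking.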
3.3 Laying the foundation for long-term collaboration between AI and operating systems
Imagine a more mature future: when an AI agent initiates an operation request, the system or platform verifies the scope, scenario, and purpose of the authorization based on the user’s DID credentials; then it decides whether to allow, restrict, or require additional verification.
This means:
AI will no longer execute automatically in a black box, but will be subject to verifiable, traceable, and revocable authorization policies.
Users can quickly and clearly take back control of their own authorizations, rather than being caught between platform over-blocking and the unpredictable experiments of technology providers.
Platforms can also more securely and confidently allow certain compliant automated functions to exist, thus avoiding a zero-tolerance approach to new technologies.
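The allow / restrict / require-additional-verification flow described above can be sketched as a simple policy function. Everything here is hypothetical: the scenario labels, the `decide` function, and the sensitive-scenario set are invented for illustration, not a real OS or platform API.

```python
# Scenarios that warrant step-up verification even for in-scope actions;
# this set is an assumption, chosen to mirror the Doubao restrictions
# (financial apps, games) described in the incident timeline.
SENSITIVE = {"payment", "account_security", "ranked_game"}

def decide(cred: dict, request: dict) -> str:
    """Map an agent's request to 'allow', 'verify' (step-up), or 'deny'."""
    if request["action"] not in cred["scope"]:
        return "deny"    # outside the scope the user actually granted
    if request["scenario"] in SENSITIVE:
        return "verify"  # sensitive scenario: ask the user to confirm
    return "allow"       # in scope and routine: proceed automatically

cred = {"scope": ["order_food", "pay_bill"]}
print(decide(cred, {"action": "order_food", "scenario": "daily"}))  # allow
print(decide(cred, {"action": "pay_bill", "scenario": "payment"}))  # verify
print(decide(cred, {"action": "transfer", "scenario": "daily"}))    # deny
```

The middle outcome is the important one: a graded "verify" response gives platforms an option between silent allowance and zero-tolerance blocking.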
The blocking and adjustments in the Doubao incident essentially demonstrate the game between technology and rules. DID offers a forward-looking governance approach: elevating the conflict between “identity and authorization” from the platform and application levels to a more unified, transparent, and user-controllable level, thereby finding a way to coexist within the conflict.
4. Conclusion: From blocking to redefining trust
The lessons from the Doubao incident are straightforward:
AI agents integrating into operating systems cannot be simply blocked. Blocking is a reaction; the trend is the main thread.
What truly needs to be addressed are identity, authorization, auditing, and rules. Without them, any attempt to put AI agents in users’ hands will struggle to achieve long-term stability.
On this path, DID is not a pipe dream but a potential piece of infrastructure. It gives users more control, platforms clearer boundaries, and AI operations more traceability, and it lets all participants move from conflict toward cooperation.
If you are interested in the future security and ecosystem development of smartphones, this is not just a battle between a particular AI phone and a particular platform, but a profound re-examination of “who can control devices and identities.” Understanding this is more important than simply praising or criticizing a technology.
The future victors may not be the first to cram AI agents into phones, nor the fastest to block them, but rather those who can find a sustainable and widely acceptable path between technology, rules, identity, and trust.