
AI Policy Is Not Just a Large-Company Problem. Here Is Why Every Organization Needs to Be Watching.

  • Writer: Christopher Hanes

Let us start with the data, because it challenges a narrative that has not aged well.


As recently as early 2024, large businesses used AI at nearly twice the rate of small businesses (11.1% versus 6.3%). By August 2025, that gap had shrunk dramatically, with small business usage reaching 8.8% while large business adoption actually declined slightly to 10.5%. A 2025 national survey found that 76% of small businesses are either actively using or exploring AI tools. Mid-size organizations are not sitting on the sidelines. In most cases, they are already in the game.


Which is exactly why the policy conversation matters for them now.


There has long been an assumption, understandable if increasingly inaccurate, that regulatory compliance and policy monitoring are primarily the domain of large, public, or heavily regulated organizations: companies with legal departments, compliance officers, and audit committees staffed precisely to track what governments and regulators are doing and to prepare accordingly. Mid-size companies, the thinking goes, move on instinct and execution. Policy is someone else's problem.


That assumption has always been a simplification. In the AI era, it is a liability.



The Regulatory Floor Applies Regardless of Your Headcount


The EU AI Act's scope does not distinguish between large and small companies. This is not a footnote. It is the foundational principle of the regulation. If your organization uses AI to screen job candidates, score customers, automate document processing, or run a client-facing chatbot, you have obligations under the most significant AI regulatory framework yet enacted, whether you have 50 employees or 50,000.


Prohibited AI practices and AI literacy obligations entered into application in February 2025, with governance rules for general-purpose AI models applying from August 2025, and full application of remaining obligations on August 2, 2026. Article 4 of the Act, which requires that all people who work with AI systems in an organization have a sufficient level of knowledge about how those systems work, their limitations, risks, and opportunities, has been in force since February 2025. That obligation is active today, and it applies to every organization deploying AI, not just the ones with dedicated compliance infrastructure.


A political agreement reached on May 7, 2026, will simplify certain provisions and extend some deadlines. Companies now have until December 2026 for certain transparency obligations and potentially until the end of 2027 for some high-risk system requirements. But as legal experts have consistently noted, the extension of certain deadlines should not be interpreted as an invitation to pause AI governance efforts. The AI Act is already in force, and organizations are still expected to prepare for compliance now.



The Situation Is the Same. The Infrastructure to Respond to It Is Not.


What genuinely distinguishes large organizations from mid-size ones in this environment is not the regulatory exposure. It is the depth of the apparatus they have traditionally deployed to monitor and manage it. Public companies have investor relations, audit committees, and external auditors asking pointed questions. Heavily regulated industries have compliance functions built expressly for this purpose. These organizations have developed the institutional muscle to track policy developments as a matter of routine.


Most mid-size organizations have not. That is not because they are less capable, but because the regulatory environment historically has not demanded it of them at the same pace or intensity. The AI policy moment changes that equation. The relevant question is not whether the rules apply. It is whether your organization has the awareness and infrastructure to respond to them appropriately.


Regulatory fines related to AI misuse reached $2.1 billion globally in 2025, a sevenfold increase from 2023. And the reputational damage from a public enforcement action can be just as harmful as the fine itself. The consequences are not reserved for organizations above a certain revenue threshold.



International Policy Is a Domestic Business Reality


One of the more durable misconceptions about international regulatory frameworks is that they are primarily the concern of multinationals with operations in multiple jurisdictions. The EU AI Act demonstrates why that view is outdated.


If your organization serves clients in Europe, uses AI tools built by European vendors, processes data governed by European law, or competes with companies operating under European regulatory frameworks, the EU AI Act affects your competitive environment and, in many cases, your direct obligations. The regulation was written with organizations of every size in mind. In fact, small and medium-sized enterprises are mentioned 38 times in the Act, compared to 7 mentions of industry and 11 mentions of civil society.


Beyond Europe, the pattern is proliferating. The US has no single federal AI law, but sector regulators in financial services, healthcare, and employment are increasingly treating existing legal obligations as fully applicable to AI systems. State-level legislation is advancing across dozens of jurisdictions. Courts are setting liability precedent in real time. The policy surface area for any organization using AI is larger, and growing faster, than most mid-size leadership teams have mapped.



What This Requires Is Not a Compliance Team. It Is a Leadership Orientation.


Large public organizations watch policy closely because their governance structures require it. Mid-size organizations need to develop that same orientation, not because it is required by a board charter, but because the AI landscape is moving fast enough that leaders who treat regulatory developments as background noise will find themselves reacting to mandates rather than preparing for them.


The organizations ahead of this right now have done something simple. They assigned ownership. Someone at the leadership level is responsible for monitoring material AI regulatory developments and bringing them into the conversation. That is not a full-time job in most mid-size organizations. It is a posture, a deliberate decision to treat policy literacy as a leadership competency rather than a legal afterthought.


The AI policy environment will continue to shift. The organizations that have built the awareness and governance infrastructure to move with it, at any size, will have more options than those that built neither.



Sources

SBA Office of Advocacy, Research Spotlight: AI in Business, Small Firms Closing In. September 2025.


Reimagine Main Street and Public Private Strategies Institute, AI and Small Business Survey. June 2025.


EU AI Act, Official Text and Implementation Timeline. European Commission. August 2024, updated through May 2026.


European Commission, Digital Omnibus on AI, Political Agreement. May 7, 2026.

Dastra, Simpler, Safer, Stricter Where It Counts: Inside the EU AI Omnibus Deal. May 2026.


Medha Cloud, AI Adoption Statistics for 2026. March 2026.


Accountancy Europe, The EU AI Act: A Guide for SME Accountants. February 2025.


IJONIS, EU AI Act for SMEs: Obligations, Deadlines, and Checklist. March 2026.



Aperture Consulting is not a law firm and does not provide legal advice. We provide informational and advisory services, tracking the evolving AI landscape across regulatory, policy, and operational dimensions and translating it into clear, actionable context that helps leadership teams make sound strategic business decisions.


