From privacy by design to AI by design: the new development standard for software companies

New design paradigm: AI by Design is the logical successor to Privacy by Design and Security by Design. For software companies and CIOs, this isn't a future trend but a current necessity to remain innovative without being slowed down by compliance later.

From adding afterwards to building in from the start

In recent years, Privacy by Design and Security by Design have evolved into core principles in software and system development. These principles mean that privacy and security are not added afterwards as a "bolt on," but are built in from the initial design.

This proactive approach is even enshrined in legislation - think of the GDPR, whose Article 25 requires data protection by design and by default in system architecture. Large organizations responded quickly and implemented privacy-by-design processes; those that don't risk hefty fines, calculated as a percentage of annual turnover.

However, we're now seeing a new dimension emerge: "AI by Design". With the explosive growth of artificial intelligence in products and services, it's becoming just as crucial to incorporate AI-related aspects from the design phase.

Privacy & Security by Design as foundation

Privacy by Design (PbD) and Security by Design form the basis of development processes where compliance with privacy and security requirements is central.

With Privacy by Design, every design decision takes into account data minimization, consent, and data protection. The idea is that privacy is "built in, not bolted on," so user information is automatically well protected. This principle isn't just best practice; in the EU, it's been effectively mandatory since the GDPR to ensure privacy from the outset.

Security by Design works similarly: at every step in software development, security measures and threat modeling are incorporated to prevent security from becoming an afterthought. This "by design" thinking essentially means considering relevant legislation during the design phase.

The result of by design thinking

Thanks to this approach, products are not only safer and more privacy-friendly but also more robust in terms of compliance from day one. The result is that compliance teams - supported by management - have now given privacy and security a permanent place in the development process.

The emergence of AI by Design

Now that AI systems are becoming commonplace in software products, a similar approach is needed for artificial intelligence. AI by Design (some refer to it as Responsible AI by Design or Trustworthy AI by Design) means actively considering AI-specific risks, ethics, and regulations when designing systems.

Just as Privacy by Design is about "built-in" privacy, AI by Design is about built-in AI accountability. This includes aspects such as:

  • Transparency of algorithms
  • Fairness (equal treatment of user groups)
  • Explainability of AI decisions
  • Prevention of bias and discrimination

Crucially, these issues must be addressed proactively, not after an AI system exhibits unwanted behavior.
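To make "fairness" concrete, here is a minimal sketch, in Python, of a demographic parity check a design team could run on a model's decisions before release. The function names, the toy data, and the 0.2 threshold are illustrative assumptions, not part of any standard.

```python
# Minimal sketch: a demographic parity check for a binary decision model.
# All names, data, and thresholds are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-decision rate per user group (e.g., per gender)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Example: group A is selected far more often than group B.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
if demographic_parity_gap(decisions, groups) > 0.2:  # threshold is a design choice
    print("Fairness review required before release")
```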

The role of regulation

An important driver behind AI by Design is regulation. The EU AI Act, adopted in 2024 and applying in phases, requires organizations to take a risk-based approach to AI applications - effectively making "Safe AI by Design" the norm.

Although the law may not use the term itself, it comes down to AI systems having to comply with safety and ethical requirements from the design stage. This builds on the idea that we must develop AI safely, ethically, and reliably before deploying it at scale.

Best practice from tech giants: Large tech companies are already anticipating this. For example, Cisco combines Security by Design, Privacy by Design, and Human Rights by Design to ensure their AI products are trusted and responsible from the outset. This integrated approach helps align AI with both business values and external standards.

Increasing complexity requires early integration

Product development in 2025 is more complex than ever. Where we previously "only" had to pay attention to privacy and security, AI ethics and safety now come on top - multiple overlapping compliance domains at once.

Convergence of compliance requirements

Interestingly, many of the requirements overlap in content. Both privacy rules and AI ethical guidelines require some form of transparency and documentation.

Recent analyses show that compliance requirements in areas such as Privacy, Security, (Cyber)Resilience, Health & Safety, Intellectual Property, general regulation, and AI largely converge - and that good documentation and quality processes are the common key to meeting all these requirements.

In other words: if an organization ensures high-quality information provision, transparency, and safeguards in design documents, it kills multiple birds with one stone. Such an investment pays off in better, more reliable products and lower development and maintenance costs.

The cumulative impact of new regulation

In addition to GDPR (privacy) and NIS2/Cybersecurity Act (security), we'll soon have the EU AI Act and the EU Cyber Resilience Act. All these rules sometimes apply simultaneously to one product. The cumulative impact is significant.

| Regulation | Domain | Impact on AI Systems |
| --- | --- | --- |
| GDPR | Privacy | Data minimization, consent, transparency |
| NIS2 | Security | Cybersecurity measures, incident response |
| EU AI Act | AI Safety | Risk assessments, human oversight, transparency |
| Cyber Resilience Act | Product Security | Security by design, vulnerability management |

Those who integrate these requirements early in the process can cleverly handle this overlap. Those who don't risk an avalanche of compliance issues right before release.

Moreover, AI functions that are built in haphazardly today may simply fail inspection later. The complexity is therefore higher, but an integrated approach from the design stage makes it manageable.

Prevent a compliance bottleneck: start at the design phase

When privacy, security, and AI aspects are addressed late in a project, a bottleneck often occurs. The product is then largely finished but doesn't comply with all rules or ethical standards - resulting in expensive redesign rounds or delays.

The solution is to see compliance not as an obstacle at the end but as a prerequisite from the beginning. As learned with Privacy by Design: build it in from day one, so you don't have to "bolt it on" later. This applies equally to AI.

Practical example: Amazon's AI recruitment debacle

A few years ago, Amazon developed an AI system to screen resumes. Only after some time did they discover that this AI systematically disadvantaged women when evaluating candidates.

Why? The model was trained on historical data full of male candidates and had "learned" that male applicants were preferred. Despite attempts to remove the bias, risks remained that the model would find other discriminatory patterns. Ultimately, Amazon had to scrap the project - a costly lesson in AI governance.

The lesson from Amazon's experience

This case shows that bias and ethical problems in AI must be addressed in the design and training phase; afterwards, it may be too late to fix them. An AI system that isn't fair, explainable, and safe from the outset can prove unusable in practice or lead to reputational damage and legal problems.

Example: generative AI chatbots

Another example is the rise of generative AI like chatbots. A company that decides to integrate an AI chatbot into its product must immediately think about questions such as:

  • How do we prevent the bot from giving inappropriate or misleading answers?
  • How do we protect user data that the bot processes?
  • What safeguards are there for transparency and explainability?

Without early measures (such as content filters, human-in-the-loop controls, and audit logging), such a feature may be blocked by compliance teams at launch or generate negative publicity.
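As an illustration of such early measures, here is a minimal Python sketch of guardrails around a chatbot: an output filter, audit logging, and a canned escalation to a human. `generate_reply`, the blocklist, and the log format are hypothetical placeholders for whatever model API and policy a real product uses.

```python
# Minimal sketch of chatbot guardrails: an output filter, audit logging, and
# a canned escalation message. `generate_reply` is a hypothetical stand-in
# for whatever model API the product actually uses.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("chatbot.audit")

BLOCKLIST = {"medical diagnosis", "legal advice"}  # illustrative policy terms

def generate_reply(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"Echo: {prompt}"

def guarded_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    # Filter: block replies that touch policy-sensitive topics and hand the
    # conversation to a human instead of answering.
    if any(term in reply.lower() for term in BLOCKLIST):
        audit_log.info("BLOCKED prompt=%r", prompt)
        return "I can't help with that directly; a colleague will follow up."
    # Log every exchange so auditors can reconstruct behavior later.
    audit_log.info("prompt=%r reply=%r", prompt, reply)
    return reply

print(guarded_reply("What are your opening hours?"))
```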

By properly framing AI from the design stage, you prevent delivery from stalling because the legal department intervenes at the last moment.

Best practices: how to apply AI by Design?

For software companies and CIOs who want to embrace AI by Design, there are concrete steps and best practices:

1. Integrate AI governance into business policy

Establish clear principles for responsible AI (for example, around transparency, fairness, accountability) and ensure they're as binding as other quality guidelines.

Some organizations establish an AI ethics board or committee that monitors from the design phase. Such a framework helps teams check with every decision: "Is this in line with our AI principles and values?"

Align by Design: Forrester Research calls this Align by Design, where AI development is aligned with business goals and values, and proactively ensures that AI causes no harm. This means, among other things, that alignment must be proactive, embedded in the design, and continuously monitored.

2. Conduct early risk and impact analyses

Just as you conduct a privacy impact assessment for new projects, you should conduct an AI Impact Assessment in the design phase. This maps out potential risks:

  • Bias in training data
  • Possible impact on user rights
  • AI model safety
  • Ethical implications of decisions

The Dutch government, for example, offers an AI Impact Assessment toolkit to support "responsible AI by design." By testing early whether a proposed AI application is proportional, necessary, and legitimate, you prevent building something that later won't pass ethical or legal scrutiny.
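As a sketch of what such an assessment could look like in practice, the following Python dataclass records the proportionality, necessity, and risk questions named above and flags a design that shouldn't proceed past review. The field names and decision rule are illustrative assumptions, not taken from the Dutch government toolkit.

```python
# Minimal sketch of an AI Impact Assessment record, filled in during the
# design phase. Field names and the decision rule are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    system_name: str
    purpose: str
    is_proportional: bool          # is AI a proportionate means for this goal?
    is_necessary: bool             # could a simpler, non-AI solution suffice?
    legal_basis: str               # which lawful ground covers the data use
    bias_risks: list[str] = field(default_factory=list)
    affected_user_rights: list[str] = field(default_factory=list)
    safety_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def blocks_release(self) -> bool:
        """A design that isn't proportional and necessary, or that lists
        unmitigated risks, should not proceed past the design review."""
        has_open_risks = (self.bias_risks or self.safety_risks) and not self.mitigations
        return not (self.is_proportional and self.is_necessary) or bool(has_open_risks)

assessment = AIImpactAssessment(
    system_name="resume-screener",
    purpose="pre-rank incoming applications",
    is_proportional=True,
    is_necessary=False,  # a rule-based filter may suffice
    legal_basis="legitimate interest (to be confirmed by legal)",
    bias_risks=["historical hiring data skewed toward one gender"],
)
print(assessment.blocks_release())  # True: the design review should intervene
```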

3. Don't forget Privacy & Security by Design

AI by Design comes on top of - not instead of - existing Privacy/Security by Design principles. Ensure data governance is in order: data quality, minimization, and consent remain crucial.

Integrated approach

AI systems often need large amounts of data; treat that data with the same care as in any other application. Also think about AI security: models themselves can be attacked (e.g., via adversarial examples or model extraction), so involve your CISO and cybersecurity team in designing AI functionality.

The bottom line remains that an AI system shouldn't create a gap in your security walls or privacy protection.

4. Ensure documentation and transparency

If it's not documented, it doesn't exist. Keep track from the beginning of how the AI was built and trained. Document datasets, algorithms/models used, and decision criteria.

This may seem like extra work, but it's invaluable for both internal understanding and external accountability. Moreover, regulation increasingly requires it explicitly: the EU AI Act mandates technical documentation and explainability for high-risk AI systems.

Practical tool: Model Card or AI Bill of Materials

A practical tool is creating a sort of "Model Card" or AI Bill of Materials (comparable to a Software Bill of Materials) in which you record all components:

  • Data collections and their origin
  • Model versions and architecture
  • Algorithm parameters and hyperparameters
  • Training methodology
  • Validation and test results
  • Known limitations and bias

Such an approach not only increases transparency to auditors and regulators but also helps internally with quality control. Studies show that high-quality documentation and transparency enable an organization to more efficiently meet diverse compliance requirements while making better products.
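Here is a minimal sketch of what a machine-readable Model Card / AI Bill of Materials could look like, covering the components listed above. The schema and field names are illustrative assumptions, loosely inspired by model cards and SBOMs; adapt them to your own audit requirements.

```python
# Minimal sketch of a machine-readable "Model Card" / AI Bill of Materials.
# The schema and all example values are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    model_name: str
    model_version: str
    architecture: str
    datasets: list[dict]            # name, origin, license per dataset
    hyperparameters: dict
    training_methodology: str
    validation_results: dict        # metric name -> score
    known_limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="support-ticket-router",
    model_version="1.4.0",
    architecture="fine-tuned transformer classifier",
    datasets=[{"name": "tickets-2023", "origin": "internal CRM export", "license": "internal"}],
    hyperparameters={"learning_rate": 3e-5, "epochs": 4},
    training_methodology="supervised fine-tuning, 80/20 train/validation split",
    validation_results={"accuracy": 0.91, "accuracy_minority_locale": 0.84},
    known_limitations=["untested on non-English tickets"],
    known_biases=["under-represents tickets from small business customers"],
)

# Store the card alongside the model artifact so every release ships with it.
with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```

Keeping the card in version control next to the model artifact means every release carries its own documentation, which is exactly what auditors and regulators will ask for.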

5. Training and culture

AI by Design requires a multidisciplinary approach. Developers, data scientists, lawyers, ethicists - they all need to collaborate. Invest in team training on AI ethics, bias awareness, and regulation.

Encourage a culture where people dare to raise problems early. For example: a data scientist who notices that a dataset contains skewed proportions should feel free to flag and fix this immediately, rather than thinking "we'll fix it later."

Kaizen principle for AI

A continuous improvement mentality (like the Japanese Kaizen principle) can help: keep iteratively improving and learning during the development process. This makes quality everyone's responsibility, from the development floor to the C-suite.

6. Continuous monitoring and adjustment

The work doesn't stop after the first release. AI systems continue to learn and change (or their environment changes), so monitor live behavior and performance. Build feedback loops to detect and correct misalignment or deviations.

Forrester emphasizes that continuous monitoring must be an inherent part of AI system design. Concretely, this means establishing measurement points for, for example:

  • Decision-making bias
  • Accuracy across different user groups
  • Performance degradation
  • Compliance with established standards

Use these metrics to periodically evaluate whether your AI is still doing what it should, in line with your values and the law.

Assign responsible parties who can intervene when a deviation is detected - whether that means retraining the model, adjusting parameters, or in extreme cases disabling the function.

This human-in-the-loop approach where necessary, combined with automated monitoring, ensures that AI doesn't become an unmanaged source of risk.
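Pulling these monitoring ideas together, here is a minimal Python sketch of a periodic check that compares live metrics against thresholds agreed at design time and raises an alert for the responsible owner. The metric names and threshold values are illustrative assumptions.

```python
# Minimal sketch of a periodic monitoring check: compare live metrics against
# thresholds agreed at design time and escalate on deviation. Metric names
# and threshold values are illustrative assumptions.
THRESHOLDS = {
    "demographic_parity_gap": 0.10,   # max allowed bias in decisions
    "accuracy_overall": 0.85,         # min acceptable accuracy
    "accuracy_worst_group": 0.80,     # min accuracy for any single user group
}

def evaluate(live_metrics: dict) -> list[str]:
    """Return the list of violated thresholds for this monitoring window."""
    violations = []
    for metric, limit in THRESHOLDS.items():
        value = live_metrics.get(metric)
        if value is None:
            violations.append(f"{metric}: not measured")
        elif metric.startswith("accuracy") and value < limit:
            violations.append(f"{metric}: {value:.2f} below minimum {limit:.2f}")
        elif not metric.startswith("accuracy") and value > limit:
            violations.append(f"{metric}: {value:.2f} above maximum {limit:.2f}")
    return violations

# In production this would run on a schedule; here, one window as an example.
window = {"demographic_parity_gap": 0.14, "accuracy_overall": 0.88,
          "accuracy_worst_group": 0.79}
for violation in evaluate(window):
    print("ALERT - notify model owner:", violation)  # owner retrains, tunes, or disables
```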

Practical implementation roadmap

Phase 1: Assessment (Month 1-2)

Inventory current AI applications and plans. Conduct a gap analysis against AI by Design principles. Identify quick wins and critical risks.

Phase 2: Framework Development (Month 3-4)

Develop an AI governance policy. Create an AI Impact Assessment template. Train the core team in AI ethics and compliance.

Phase 3: Implementation (Month 5-6)

Integrate AI by Design into the development lifecycle. Implement monitoring and documentation tools. Pilot with a first AI project.

Phase 4: Scaling (Month 7-12)

Roll out to all development teams. Automate compliance checks where possible. Establish a continuous improvement cycle.

Phase 5: Optimization (Month 12+)

Refine processes based on experience. Benchmark against industry standards. Build AI governance into a competitive advantage.

Continuous: Monitoring

Real-time performance tracking. Regular audits and reviews. Proactive adjustment to new regulation.

The business case for AI by Design

AI by Design is not just a compliance necessity but also delivers direct business value:

Risk reduction and cost savings

Organizations with proactive AI governance report 40% fewer complaints about algorithmic decisions compared to reactive governance models. This translates into:

  • Increased trust from customers and regulators
  • Faster approval of new AI applications
  • Lower compliance costs by preventing costly corrections afterwards
  • Avoiding reputational damage and fines

| Cost Item | Reactive Approach | AI by Design |
| --- | --- | --- |
| Redesign costs | High (30-50% extra budget) | Low (5-10% extra upfront) |
| Time-to-market delays | 3-6 months average | Minimal |
| Incident response costs | €50K-500K per incident | Preventively addressed |
| Reputational damage | Unpredictable, potentially severe | Strongly mitigated |

Faster time-to-market

Organizations with mature governance practices achieve 25% faster time-to-market for new AI applications because:

  • Compliance checks are automated (a minimal release-gate sketch follows this list)
  • There are no last-minute surprises
  • Stakeholder buy-in is obtained early
  • Technical debt is prevented
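As a sketch of what an automated compliance check could look like, the following Python release gate blocks a release when required compliance artifacts are missing. The artifact file names are illustrative assumptions, echoing the documentation practices described earlier.

```python
# Minimal sketch of an automated compliance gate in a release pipeline:
# block the release if required artifacts are missing. The file names are
# illustrative assumptions matching the documentation practices above.
import sys
from pathlib import Path

REQUIRED_ARTIFACTS = [
    "model_card.json",          # see the Model Card sketch earlier
    "impact_assessment.json",   # completed AI Impact Assessment
    "monitoring_plan.md",       # thresholds and owners for live monitoring
]

def compliance_gate(release_dir: str) -> int:
    missing = [a for a in REQUIRED_ARTIFACTS if not (Path(release_dir) / a).exists()]
    if missing:
        print("Release blocked; missing compliance artifacts:", ", ".join(missing))
        return 1
    print("Compliance gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(compliance_gate(sys.argv[1] if len(sys.argv) > 1 else "."))
```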

Competitive advantage and trust

In a time of increasing concerns about AI (from bias to privacy risks), companies that embrace "AI by Design" will be more agile and credible. This delivers concrete advantages:

  • 60% higher stakeholder trust scores in independent assessments
  • Better position in tenders and enterprise sales
  • Attractiveness for talent that values ethical AI
  • Positive differentiation in marketing and PR

Strategic advantage: Organizations that proactively invest in transparency and responsible AI build trust with customers and users because they can demonstrate that their product handles AI responsibly from the outset. In the future, this will provide a significant competitive advantage.

Conclusion: AI by Design as the new standard

"AI by Design" is the new reality for modern software development. Just as Privacy by Design and Security by Design are now established principles, AI by Design will become so as well.

For software companies and CIOs, this isn't a luxury but a necessity: it's the way to be innovative without being slowed down by compliance later. By incorporating AI from the design phase - ethically, legally, and technically - you prevent your product from being delayed or having to be modified to comply with regulations right before the finish line.

Three core messages for organizations

1. Start now, even if regulation isn't complete yet

The EU AI Act is entering into application in phases, but waiting until every detail is settled is not an option. Organizations that adopt AI by Design now build a head start and can make the transition gradually.

2. See it as an investment, not a cost

AI by Design requires initial investment in time, tools, and training. But this investment pays off in lower compliance costs, faster time-to-market, and reputational benefits. It's not overhead but a strategic investment.

3. Make it multidisciplinary

AI by Design cannot be the responsibility of only the IT department or only legal. It requires collaboration between development, legal, compliance, security, and business stakeholders. Create the structures and culture to facilitate this collaboration.

The future of responsible AI development

AI by Design doesn't mean you have to have all the answers upfront. It does mean that you ask the right questions from the beginning and build in mechanisms to find answers as you develop. It's a shift from reactively firefighting to proactively architecting reliable AI systems.

This way, compliance doesn't become an annoying hurdle on the road but an integrated part of your innovation strategy - and thus an enabler for sustainable business in the AI era.

In short: AI by Design is good design, good governance, and good business.