If there’s one organization that understands the cost of failure, it’s NASA. Its “Power of 10” rules for developing safety-critical code, written at the Jet Propulsion Laboratory, were designed to prevent catastrophic errors in space missions, where even a minor software glitch can mean the difference between success and disaster.
These rules prioritize simplicity, predictability, verifiability, and controlled dependencies — qualities that should be second nature in generative AI (GenAI) development as well. While NASA’s engineers build software that operates in space, GenAI teams build systems that influence critical decisions in healthcare, finance, law, cybersecurity, and beyond. When an AI system provides false information or behaves unpredictably, the consequences can be just as severe.

Inspired by NASA’s meticulous approach, here is my take on ten rules for developing GenAI applications that people can actually trust.

  1. Keep the Model as Simple as Possible
    NASA minimizes complexity to prevent failures. Don’t deploy an overly complex model when a simpler system suffices; avoid excessive fine-tuning and unnecessary parameters that introduce unpredictability.
  2. Rely on Proven, Transparent Data Sources
    Just as NASA vets every component, GenAI should be trained only on trusted, high-quality data. Poor training data leads to misinformation and bias.
  3. Prioritize Determinism Over Randomness
    NASA software behaves predictably. GenAI should limit variability in high-stakes applications by controlling randomness (e.g., temperature and seed settings) and grounding outputs in retrieved sources (RAG); a decoding sketch follows this list.
  4. Validate and Verify Before Deployment
    NASA rigorously tests software before launch. GenAI should be validated against real-world data and edge cases, with human-in-the-loop (HITL) oversight; a minimal evaluation harness appears after this list.
  5. Fail Safely and Gracefully
    When NASA software fails, it preserves mission integrity. GenAI should avoid hallucinations and flag uncertainty instead of generating misleading responses; a fallback sketch follows this list.
  6. Limit External Dependencies
    NASA restricts third-party software to minimize risk. GenAI should avoid excessive reliance on unstable APIs, plugins, or unverified external data, and wrap unavoidable dependencies defensively (see the sketch after this list).
  7. Design for Human Oversight
    NASA never fully trusts automation; GenAI should support, not replace, human decision-making (a review-gate sketch follows this list).
  8. Detect and Mitigate Bias
    Bias can emerge in both training data and model behavior. Regularly audit AI outputs for bias and retrain models as needed to ensure fairness; a counterfactual probe sketch follows this list.
  9. Continuously Monitor and Update
    NASA updates software to maintain safety. GenAI must be monitored in production for bias, drift, and emerging risks (a logging sketch follows this list).
  10. Establish Clear Ethical and Safety Guidelines
    NASA follows strict safety protocols — GenAI teams must define ethical AI principles, governance policies, and accountability measures.
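
To make some of these rules concrete, the sketches below show one possible shape for each in Python. For rule 3, here is a minimal sketch of deterministic-leaning decoding, assuming the openai Python SDK (v1+) with an API key in the environment; the model name and seed value are illustrative, and the seed gives only best-effort reproducibility.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Query the model with randomness dialed down for high-stakes use."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # minimize sampling variance
        seed=42,              # best-effort reproducibility across calls
    )
    return response.choices[0].message.content
```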
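
For rule 4, a pre-deployment evaluation harness might start like the sketch below. Here `generate` stands in for your application’s generation function, and the two cases are placeholders for a much larger suite; failures get routed to human review rather than shipped.

```python
from typing import Callable

# Illustrative cases; real suites need many more, including adversarial ones.
TEST_CASES = [
    {"prompt": "What year did Apollo 11 land on the Moon?",
     "must_contain": "1969"},
    {"prompt": "What will the stock market do tomorrow?",  # edge case: unanswerable
     "must_contain": "I don't"},  # expect a hedged refusal, not a prediction
]

def run_suite(generate: Callable[[str], str]) -> list[dict]:
    """Return failing cases so a human can review them before deployment."""
    failures = []
    for case in TEST_CASES:
        output = generate(case["prompt"])
        if case["must_contain"] not in output:
            failures.append({"case": case, "output": output})
    return failures
```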
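
For rule 5, failing gracefully can be as simple as refusing when the system has nothing to ground its answer in. The `retrieve` and `generate` functions below are hypothetical, and the overlap check is a deliberately crude stand-in for a real entailment or citation-verification step.

```python
FALLBACK = ("I don't have enough reliable information to answer that. "
            "Please consult a qualified human expert.")

def answer_or_refuse(question: str, retrieve, generate) -> str:
    """Return a grounded answer, or the safe fallback instead of a guess."""
    snippets = retrieve(question)          # hypothetical retriever
    if not snippets:
        return FALLBACK                    # no evidence: refuse outright
    draft = generate(question, snippets)   # hypothetical grounded generation
    # Crude check that the draft actually uses the evidence; real systems
    # would use an entailment model or citation verification here.
    grounded = any(s.lower()[:40] in draft.lower() for s in snippets)
    return draft if grounded else FALLBACK
```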
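
For rule 6, dependencies you can’t remove should at least be wrapped defensively. This standard-library-only sketch assumes a hypothetical `call_model_api` client; the timeout, retry count, and backoff schedule are illustrative.

```python
import time

def call_with_guardrails(call_model_api, payload, retries=2, timeout_s=10):
    """Call an external API with a timeout, retries, and backoff."""
    for attempt in range(retries + 1):
        try:
            return call_model_api(payload, timeout=timeout_s)
        except Exception as exc:  # in practice, catch the client's specific errors
            if attempt == retries:
                raise RuntimeError("external model API unavailable") from exc
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, ...
```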
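
For rule 7, the simplest form of human oversight is a gate: outputs above some risk threshold are queued for a person instead of acted on automatically. The threshold, `risk_score`, and the `act` callback are all placeholders.

```python
REVIEW_THRESHOLD = 0.5  # illustrative; tune per application and risk appetite

def dispatch(output: str, risk_score: float, review_queue: list, act) -> None:
    """Route high-risk outputs to a human; let low-risk ones proceed."""
    if risk_score >= REVIEW_THRESHOLD:
        review_queue.append(output)  # a human approves, edits, or rejects
    else:
        act(output)                  # hypothetical downstream action
```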
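
For rule 8, one cheap bias probe is to run prompt pairs that differ only in a demographic attribute and diff the results. The pairs below are illustrative, and exact string comparison is a stand-in for the semantic-similarity scoring a real audit would use.

```python
PAIRS = [  # illustrative counterfactual pairs
    ("Write a performance review for John, a nurse.",
     "Write a performance review for Jane, a nurse."),
]

def probe_bias(generate) -> list[dict]:
    """Flag pairs whose outputs diverge, for human review."""
    findings = []
    for prompt_a, prompt_b in PAIRS:
        out_a, out_b = generate(prompt_a), generate(prompt_b)
        if out_a != out_b:  # crude; use semantic similarity in practice
            findings.append({"prompts": (prompt_a, prompt_b),
                             "outputs": (out_a, out_b)})
    return findings
```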
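
Finally, for rule 9, monitoring starts with logging every response and tracking a simple health metric over time; here that metric is the refusal rate, reusing the fallback phrasing from the rule 5 sketch. The log path and the metric itself are illustrative.

```python
import json
import time

LOG_PATH = "genai_outputs.jsonl"  # illustrative log location

def log_response(prompt: str, output: str) -> None:
    """Append each interaction to an audit log for later review."""
    record = {"ts": time.time(), "prompt": prompt, "output": output}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def refusal_rate(recent_outputs: list[str]) -> float:
    """A sudden jump or drop in refusals is a cheap drift signal."""
    refusals = sum(o.startswith("I don't have enough reliable")
                   for o in recent_outputs)
    return refusals / max(len(recent_outputs), 1)
```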