When AI Models Argue, You Win: Introducing ILETP

Jan 22, 2026

Introduction to ILETP

Imagine a digital conference room where ChatGPT, Claude, Gemini, and Llama are all sitting around a table. You ask a question, and instead of one of them politely hallucinating an answer, they all discuss it. They debate, they cross-check facts, and they only present you with the solution they agree on. This is not the setup for a geeky joke—this is the core concept behind ILETP (Inter-LLM Ensemble Trust Platform).

In a world where “trust me, bro” is the standard operating procedure for many AI chatbots, ILETP treats divergence as a feature, not a bug. It is an open-source platform concept designed to coordinate multiple Large Language Models (LLMs) to interact, critique, and synthesize outputs. The goal? To give you, the human user, a “Trust Score” you can actually rely on, rather than blind faith.

Key Features

ILETP is built for those who need accountable and auditable AI. Here is what makes it special:

  • Ensemble Collaboration: It orchestrates real-time discussions between different models (e.g., Anthropic’s Claude, OpenAI’s GPT, Google’s Gemini) to reach a consensus.
  • Trust Scoring: The platform generates a quantifiable trust score based on the agreement level between models. If they all agree, the score goes up. If they bicker, it goes down.
  • Divergence as a Feature: Instead of hiding disagreements, ILETP highlights them, allowing you to see where the models differ and why.
  • Vendor Neutrality: It is designed to be model-agnostic, preventing vendor lock-in and allowing you to swap auditable “judges” as needed.
  • AI-Directed Development: Fun fact—the proof-of-concept app was built in about 24 hours by a Product Manager with no coding experience, using Claude as the lead developer within Xcode.
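
Curious what "agreement level" might look like in code? The repo's actual scoring logic lives in its Swift sources, but as a rough, hypothetical sketch (in Python, one of the port targets the author suggests), a pairwise-agreement trust score could be as simple as token overlap averaged across every pair of model answers. All function names here are illustrative, not from the ILETP codebase:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two answers (0 = disjoint, 1 = identical)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def trust_score(answers: list[str]) -> float:
    """Mean pairwise agreement across all model answers."""
    pairs = [(i, j) for i in range(len(answers))
             for j in range(i + 1, len(answers))]
    if not pairs:
        return 1.0  # a single answer cannot disagree with itself
    return sum(jaccard(answers[i], answers[j]) for i, j in pairs) / len(pairs)
```

A production version would more likely compare embeddings or structured claims rather than raw tokens, but the shape is the same: more agreement between models, higher score.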

Installation Guide

Currently, the ILETP repository provides a “starter kit” meant for Apple ecosystems, specifically using Swift and Xcode. It is a playground for developers to fork and extend.

Prerequisites

You will need a Mac with Xcode installed (version 16.0 or later recommended) and valid API keys for the services you wish to test (OpenAI, Anthropic, Google, Mistral).

Cloning the Repo

To get started, clone the repository to your local machine:

git clone https://github.com/peterzan/ILETP.git

Setting Up Credentials

Open the project in Xcode. You will need to locate the configuration section (often within the app’s main Swift files or a dedicated `Secrets` file, depending on the current build) to input your API keys for the respective LLM providers.
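
If you end up porting the concept rather than running the Swift app, a common pattern is to pull keys from environment variables instead of a source file. This is a hypothetical sketch, not how the Xcode project is wired; the variable names are assumptions:

```python
import os

# Hypothetical port-style config: the Swift app reads keys from its own
# configuration section; a Python port might pull them from the environment.
PROVIDERS = {
    "openai":    os.environ.get("OPENAI_API_KEY"),
    "anthropic": os.environ.get("ANTHROPIC_API_KEY"),
    "google":    os.environ.get("GOOGLE_API_KEY"),
    "mistral":   os.environ.get("MISTRAL_API_KEY"),
}

def configured_providers() -> list[str]:
    """Only dispatch to providers whose key is actually set."""
    return [name for name, key in PROVIDERS.items() if key]
```

Keeping keys out of source files also makes it harder to accidentally commit them to a public fork.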

How to Use ILETP

The current implementation serves as a proof-of-concept chat interface.

The Chat Interface

Once you run the app in the Xcode Simulator or on a device, you are presented with a chat window. When you send a prompt, the application dispatches it to the configured LLMs simultaneously.
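
The fan-out step is conceptually simple: one prompt goes out to every configured model at once, and the replies come back together. As a hedged sketch in Python (the `ask_model` stub stands in for real provider API calls and is not part of the repo), concurrent dispatch might look like this:

```python
import asyncio

async def ask_model(model: str, prompt: str) -> tuple[str, str]:
    # Stand-in for a real API call (OpenAI, Anthropic, etc.).
    await asyncio.sleep(0)  # simulates network latency
    return model, f"[{model}] answer to: {prompt}"

async def fan_out(models: list[str], prompt: str) -> dict[str, str]:
    """Send the same prompt to every model concurrently."""
    replies = await asyncio.gather(*(ask_model(m, prompt) for m in models))
    return dict(replies)

answers = asyncio.run(fan_out(["gpt", "claude", "gemini"], "Is water wet?"))
```

Because the calls run concurrently, total latency is roughly that of the slowest model rather than the sum of all of them.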

Interpreting Results

Instead of a single stream of text, you will see responses from multiple “agents.” The system then attempts to synthesize these responses. If the models provide conflicting information, the interface flags this, allowing you to decide which expert to trust. It is like having a second, third, and fourth opinion instantly.
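
How might the interface decide that answers conflict? One simple, hypothetical approach (again sketched in Python, not taken from the repo) is to group models by the normalized answer they gave; more than one group means the ensemble is contested:

```python
from collections import defaultdict

def divergence_report(answers: dict[str, str]) -> dict[str, list[str]]:
    """Group models by the (normalized) answer they produced."""
    groups: dict[str, list[str]] = defaultdict(list)
    for model, answer in answers.items():
        groups[answer.strip().lower()].append(model)
    return dict(groups)

def is_contested(answers: dict[str, str]) -> bool:
    """True when the models split into more than one answer group."""
    return len(divergence_report(answers)) > 1
```

Exact-match grouping is crude for free-form text; a real system would cluster semantically similar answers, but the UI signal is the same: surface the split instead of hiding it.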

Contribution Guide

The author, Peter Zan, has explicitly stated that this repository is a “living experiment” and a foundation for others. He is not committing to ongoing feature development, which means the keys to the castle are effectively yours.

Forking is Encouraged

The best way to contribute is to fork the repository. Treat the existing code as a base layer. You are encouraged to:

  • Rewrite the orchestration logic.
  • Port the Swift code to Python or TypeScript for broader web usage.
  • Create new “Trust Score” algorithms.

If you build something cool, the author invites you to open an issue or send an email to share your creation.

Community & Support

Since this is an experimental open-source project, support is community-driven.

  • GitHub Issues: Use the Issues tab for discussion, but remember that the primary maintainer is encouraging forks rather than promising immediate fixes.
  • Documentation: The `supporting-docs` folder in the repo contains deep dives into the philosophy and specifications of the platform.

Conclusion

ILETP is a bold step towards “Adult AI”—systems that are accountable, transparent, and verified. It acknowledges that LLMs can be wrong and builds a safety net of consensus around them. Whether you are a Swift developer looking to tinker with multi-agent systems or an AI researcher interested in trust metrics, ILETP provides a fascinating blueprint for the future of automated collaboration.

Useful Resources

  • ILETP GitHub Repository: The official source code and documentation.
  • Xcode: Required IDE for running the Swift-based proof of concept.
  • OpenAI API: One of the key providers used in the ensemble.

Frequently Asked Questions

Is ILETP a new AI model like GPT-4?

No, ILETP is not a model itself. It is a platform (hence the ‘P’) that coordinates existing models like GPT-4, Claude 3, and Gemini. Think of it as a manager that oversees a team of AI employees to ensure they are doing their job correctly.

Do I need to know Swift to use this?

To run the current proof-of-concept application provided in the repository, yes, you need some familiarity with the Apple ecosystem (Xcode/Swift). However, the specifications and architecture are language-agnostic, and developers are encouraged to port the concept to Python, JavaScript, or other languages.

Can I use ILETP for commercial projects?

Yes, the code is licensed under the Apache 2.0 License, which is very permissive and allows for commercial use, modification, and distribution. The documentation is licensed under CC BY 4.0.

Does it cost money to run?

The ILETP software itself is free and open-source. However, because it queries multiple AI models (from OpenAI, Anthropic, and others) simultaneously, you pay the API usage for every model involved in the conversation. A single query can therefore cost three to four times as much as a standard single-model prompt; that premium is the price of the extra reliability.
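
A quick back-of-envelope estimate makes the multiplier concrete. The prices below are purely illustrative placeholders (real per-token prices vary by model and change often):

```python
def ensemble_cost(n_models: int, in_tokens: int, out_tokens: int,
                  in_price: float = 0.01, out_price: float = 0.03) -> float:
    """Rough USD cost of fanning one prompt out to n_models providers,
    assuming every provider charges the same (made-up) per-1K-token rates."""
    single = (in_tokens / 1000) * in_price + (out_tokens / 1000) * out_price
    return n_models * single

# 4 models, 500 input tokens, 800 output tokens each:
# 4 * (0.005 + 0.024) = about $0.116 per query
```

In other words, the ensemble costs exactly n times a single model under equal pricing; in practice the multiplier shifts with each provider's actual rates.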
