In today’s digital world, personal data is scattered across various platforms owned by tech giants, limiting user control. Current AI applications, such as ChatGPT and Google Gemini, rely on centralized data storage, which raises privacy concerns and prevents them from delivering truly personalized services.
PIN AI addresses these challenges by offering a decentralized on-device personal intelligence system, secure edge computing, and Trusted Execution Environments (TEEs). This approach ensures personal data remains private while enabling seamless interactions with AI agents tailored to individual needs. With these technologies, PIN AI empowers users to control their data and experience the full potential of AI.
Personal Intelligence Network AI (PIN AI) is an open platform for Personal AI. It enables users to own and control their personal data while training and deploying AI models tailored to their needs. PIN AI combines on-device computation, Trusted Execution Environment (TEE) security, and blockchain verification to facilitate seamless interactions between humans and AI agents, all mediated by Personal AI. This platform connects users to a marketplace of specialized AI agents that can perform tasks such as booking appointments, analyzing data, managing finances, and simplifying daily digital tasks, all while preserving privacy.
PIN AI is built around Personal AI, which enables users to have AI models uniquely tailored to their individual needs and preferences. Unlike traditional AI systems that operate on centralized servers, Personal AI models are trained and deployed directly on users’ devices. This ensures that the AI can provide personalized assistance in a private and secure way.
One of the core principles of PIN AI is data ownership. In the current digital landscape, users’ personal data is often controlled by large tech companies, limiting their ability to manage and benefit from their own information. PIN AI addresses this issue by giving users full control over their data. Users can decide what data to share, with whom, and under what conditions. This empowers individuals to monetize their data if they choose while ensuring their privacy and security.
PIN AI introduces the concept of an Agent Economy, where specialized AI agents can perform a wide range of tasks for users. These agents can book appointments, analyze data, manage finances, and simplify daily digital tasks. The Agent Economy is a marketplace where users can access and deploy these AI agents based on their specific needs.
The PIN Onchain Protocol leverages blockchain technology through a series of smart contracts to ensure the integrity and security of data handling and AI agent interactions, enabling on-chain validation and verification processes. Key aspects of the protocol include:
Verifiable Computing Framework: The Verifiable Computing Framework is responsible for ensuring the accuracy and reliability of off-chain computations. It achieves this by validating Trusted Execution Environment (TEE) attestation reports and monitoring the activity of decentralized services, including God Models, Data Connectors, and On-Device Large Language Models (LLMs). This framework ensures that all computations and interactions are transparent and tamper-proof.
Agent Registry: The Agent Registry is a decentralized registry for AI agents and data services within the PIN AI network. It maintains a comprehensive list of AI agents, each with associated reputation scores and staking mechanisms. This registry enables users to discover and deploy AI agents based on their performance and reliability, fostering trust and accountability within the ecosystem.
The PIN Onchain Protocol enables on-chain validation of TEE attestation reports, ensuring that only verified and secure computations are performed. This validation process involves checking the integrity of the nodes and the computations they carry out. Additionally, the protocol validates zk-proofs generated after agent actions, further enhancing the security and transparency of AI interactions. To ensure that the nodes within the network are connected, stable, and high-performing, the PIN Onchain Protocol includes mechanisms for monitoring worker activity. This monitoring process helps maintain the overall health and performance of the network, ensuring that users receive reliable and efficient AI services.
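To make these moving parts concrete, the following is a minimal Python sketch of how a registry could tie staking, reputation, and verified TEE attestations together. The names (`AgentRegistry`, `record_attestation`) and the staking and reputation parameters are illustrative assumptions, not PIN AI’s actual contracts.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    stake: float                 # funds locked by the agent operator
    reputation: float = 0.5      # starts neutral, updated after each task
    attested: bool = False       # set once a valid TEE attestation is recorded

@dataclass
class AgentRegistry:
    min_stake: float = 100.0
    agents: dict = field(default_factory=dict)

    def register_agent(self, agent_id: str, stake: float) -> None:
        # Agents must lock a minimum stake before they can be listed.
        if stake < self.min_stake:
            raise ValueError("stake below registry minimum")
        self.agents[agent_id] = AgentRecord(stake=stake)

    def record_attestation(self, agent_id: str, report_valid: bool) -> None:
        # Only agents whose TEE attestation report passed validation may serve intents.
        self.agents[agent_id].attested = report_valid

    def update_reputation(self, agent_id: str, task_ok: bool) -> None:
        # Simple exponential moving average toward 1.0 (success) or 0.0 (failure).
        rec = self.agents[agent_id]
        rec.reputation = 0.9 * rec.reputation + 0.1 * (1.0 if task_ok else 0.0)

registry = AgentRegistry()
registry.register_agent("travel-agent-01", stake=250.0)
registry.record_attestation("travel-agent-01", report_valid=True)
registry.update_reputation("travel-agent-01", task_ok=True)
```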
Data Connectors
Data Connectors are software components that securely fetch, process, and structure personal data within Trusted Execution Environments (TEEs). They enable users to connect to and retrieve information from major Web2 and Web3 platforms, such as Google, Apple, Meta, Amazon, MetaMask, and Phantom, while ensuring data privacy and user control. Data Connectors process and structure data into a personal knowledge graph, making it accessible and optimized for use by Personal AI.
How Data Connectors Work
The connector secures access to the user’s data via API calls after authorization, then processes it into a personalized knowledge graph. It submits hardware-backed attestation reports to the PIN Onchain Protocol, verifying secure handling in Trusted Execution Environment (TEE) nodes. Verified data is stored securely per user preference (local device, cloud, or dedicated storage), preserving privacy and enabling direct AI access.
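As an illustration only, the sketch below walks through that pipeline in Python: an authorized fetch (stubbed), structuring the data into a small knowledge graph, a placeholder attestation commitment, and storage at a user-chosen location. All function names and data shapes are hypothetical, and no real provider APIs are called.

```python
import hashlib
import json

def fetch_user_data(access_token: str) -> list[dict]:
    # Stand-in for authorized API calls to a Web2/Web3 platform;
    # a real connector would call the provider's API inside a TEE.
    return [{"type": "calendar_event", "title": "Dentist", "date": "2025-03-01"}]

def build_knowledge_graph(records: list[dict]) -> dict:
    # Structure raw records into simple (subject, relation, object) triples.
    graph = {"triples": []}
    for rec in records:
        graph["triples"].append(("user", f"has_{rec['type']}", rec["title"]))
    return graph

def attestation_report(graph: dict) -> dict:
    # Placeholder for a hardware-backed TEE attestation; here we only
    # commit to a hash of the processed data so it can be checked later.
    digest = hashlib.sha256(json.dumps(graph, sort_keys=True).encode()).hexdigest()
    return {"enclave": "simulated", "data_commitment": digest}

def store(graph: dict, location: str = "local_device") -> None:
    # User-chosen storage target: local device, cloud, or dedicated storage.
    print(f"storing {len(graph['triples'])} triples at {location}")

records = fetch_user_data(access_token="user-granted-token")
graph = build_knowledge_graph(records)
report = attestation_report(graph)
store(graph, location="local_device")
```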
On-device Large Language Models (LLMs) run directly on user devices, such as smartphones, laptops, or private clouds, keeping sensitive data under user control and safeguarding privacy through localized processing. Key features include:
Contextual Personalization: A continuously updated “Personal Index”, derived from user interactions, history, and preferences, enables highly tailored responses.
Hybrid Architecture: Local processing is combined with optional cloud resources at the user’s discretion.
Secure Data Handling: Trusted hardware enclaves (TEEs) prevent unauthorized access.
Iterative Learning: The model adapts and improves over time as user needs evolve.
How On-Device LLMs Work
Models are stored in compressed form on an SSD/HDD or within a private cloud and accessed via the PIN AI app. AI computations are executed directly on the device’s CPU or GPU, ensuring sensitive operations remain localized to safeguard security and privacy. A hybrid approach combines this local processing with optional cloud resources for complex tasks, all under user control, allowing seamless scalability while preserving autonomy. Local personalization is further supported through on-device learning and model updates performed during downtime, enabling continuous adaptation to user behavior and preferences without compromising data privacy.
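A minimal sketch of the hybrid routing decision is shown below, assuming a simple token budget for the local model and an explicit user opt-in flag for cloud offload. The names and thresholds are illustrative assumptions, not PIN AI’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    est_tokens: int           # rough size of the job
    contains_sensitive: bool  # whether the prompt references private user data

LOCAL_TOKEN_BUDGET = 2048     # assumed capacity of the compressed on-device model

def run_local(task: Task) -> str:
    # Placeholder for on-device inference on the CPU/GPU.
    return f"[local model] {task.prompt[:40]}..."

def run_cloud(task: Task) -> str:
    # Placeholder for optional cloud inference, used only with user consent.
    return f"[cloud model] {task.prompt[:40]}..."

def route(task: Task, cloud_opt_in: bool) -> str:
    # Sensitive data never leaves the device; large, non-sensitive jobs may
    # be offloaded only if the user has opted in to cloud resources.
    if task.contains_sensitive or not cloud_opt_in:
        return run_local(task)
    if task.est_tokens > LOCAL_TOKEN_BUDGET:
        return run_cloud(task)
    return run_local(task)

print(route(Task("Summarize my calendar for next week", 300, True), cloud_opt_in=True))
print(route(Task("Draft a 20-page market report", 6000, False), cloud_opt_in=True))
```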
Guardian of Data (God) Models are specialized validation models operating within Trusted Execution Environments (TEEs) across the PIN AI Network. They continuously assess Personal AIs to ensure accuracy and alignment with user data, while providing feedback to users to enhance Personal AI development through Data Connector integration. These models form the foundation for a robust Agent Ecosystem by encouraging iterative refinement of Personal AIs, and actively detect and mitigate threats like data manipulation, adversarial behaviors, and synthetic data injection to safeguard system integrity.
God Models Evaluation Framework
Initialization: Personal AI records basic metadata, including verified data sources, interaction history, and activity patterns.
Periodic or Randomized Queries: The God Model assesses the effectiveness of Personal AI in delivering user-specific contextual information.
Response Verification: The God Model checks the response against verified data logs or a previously stored state to assess effectiveness.
Score Adjustment: The God Model assigns a “knowledge score” based on the responses, adjusting it for consistency, temporal accuracy, and confidence levels.
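The evaluation loop can be sketched as follows. The 0.6/0.2/0.2 weighting of correctness, temporal accuracy, and confidence is an assumption for illustration; the actual scoring formula is not specified here, and the function names are hypothetical.

```python
def verify_response(answer: str, verified_log: dict, query_key: str) -> bool:
    # Check the Personal AI's answer against the verified data log for that query.
    return verified_log.get(query_key) == answer

def knowledge_score(results: list[dict]) -> float:
    # results: one entry per query with 'correct', 'fresh' (temporal accuracy),
    # and 'confidence' fields. The 0.6/0.2/0.2 weights are illustrative only.
    if not results:
        return 0.0
    total = 0.0
    for r in results:
        total += 0.6 * r["correct"] + 0.2 * r["fresh"] + 0.2 * r["confidence"]
    return total / len(results)

verified_log = {"last_flight_booked": "NYC to SFO on 2025-02-20"}
answer = "NYC to SFO on 2025-02-20"            # what the Personal AI replied
correct = verify_response(answer, verified_log, "last_flight_booked")
results = [{"correct": float(correct), "fresh": 1.0, "confidence": 0.9}]
print(f"knowledge score: {knowledge_score(results):.2f}")
```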
The Intent Matching Protocol is a crucial component of the PIN AI platform, designed to enable seamless coordination between users’ Personal AI and external AI agents. The protocol ensures efficient, privacy-preserving, and verifiable service execution: users express intents (requests for AI-driven actions or services), and AI agents compete to fulfill them optimally.
Key Components of Intent Matching Protocol
Intent Submission: Users submit structured intent requests through their Personal AI. These requests specify parameters such as service category, budget, and constraints.
Bid Submission: AI agents respond to users’ intent requests with competitive bids. These bids include details on service capabilities, pricing, and reputation scores. The bidding process allows AI agents to showcase their strengths and qualifications, ensuring that users receive high-quality services.
Intent Matching Algorithm: The protocol employs an intent matching algorithm to evaluate agent bids based on various factors, such as preference embeddings, bid competitiveness, and reputation metrics. This algorithm ensures that the selected AI agent provides optimal service quality at minimal cost.
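A minimal sketch of such a bid-scoring step is shown below, combining preference-embedding similarity, price competitiveness, and reputation. The weights and field names are assumptions for illustration, not the protocol’s actual parameters.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Similarity between the intent's preference embedding and the bid's embedding.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def score_bid(bid: dict, intent: dict, w_fit=0.5, w_price=0.3, w_rep=0.2) -> float:
    # Weighted blend of preference fit, price competitiveness, and reputation.
    fit = cosine(bid["embedding"], intent["preference_embedding"])
    price_fit = max(0.0, 1.0 - bid["price"] / intent["budget"])  # cheaper is better
    return w_fit * fit + w_price * price_fit + w_rep * bid["reputation"]

intent = {"category": "travel", "budget": 100.0,
          "preference_embedding": [0.9, 0.1, 0.3]}
bids = [
    {"agent": "agent-a", "price": 80.0, "reputation": 0.92, "embedding": [0.8, 0.2, 0.4]},
    {"agent": "agent-b", "price": 40.0, "reputation": 0.60, "embedding": [0.1, 0.9, 0.2]},
]
best = max(bids, key=lambda b: score_bid(b, intent))
print("selected agent:", best["agent"])
```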
The Agent Services Protocol is a decentralized marketplace that connects users’ Personal AI with specialized AI agents. It facilitates intent matching, transparent service execution, and programmable payments, fostering an open and competitive economy for AI agent innovation.
Key Components of the Agent Services Protocol
Trusted Execution Environments (TEEs) are isolated enclaves within processors or data centers where programs can run without interference from the rest of the system. These environments protect sensitive data and authenticate and verify computations performed within them. TEEs ensure that even if the main system is compromised, the data and processes within the TEE remain secure.
Within the PIN AI network, TEE nodes provide secure, isolated compute environments for executing sensitive tasks, ensuring confidentiality and integrity. Participants can customize these nodes to support diverse workloads and use cases while maintaining stringent security standards. Key services include hosting Data Connectors, which securely fetch and process user data inside the confidential environment, and executing private LLM inference and other privacy-preserving computations critical to the PIN Network. This setup ensures that critical processes, from data handling to advanced model inference, are performed securely and transparently under user control, upholding high standards of data protection and operational reliability.
Verification is fundamental to ensuring a trust-minimized environment for TEEs in the PIN Network. This process involves verifying the hardware integrity of a TEE device (e.g., an Intel SGX device), confirming that the CPU is genuine and that the certificate chain is valid and issued by a trusted manufacturer. Before a TEE executes any programs, the process of remote attestation ensures that the TEE is running an untampered version of the expected code, providing security assurance at a hardware level. Data Connectors record verification details as metadata on-chain, enabling transparency and auditability. This process is used when registering TEE devices on the network.
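The sketch below illustrates the shape of this check with simulated inputs: a certificate chain that must terminate in a trusted manufacturer root, a code measurement compared against the expected build, and the result recorded as registration metadata. Real attestation verification relies on manufacturer tooling (e.g., Intel’s attestation services) that is not reproduced here; every identifier in this sketch is hypothetical.

```python
import hashlib
from dataclasses import dataclass

TRUSTED_ROOTS = {"intel-sgx-root-ca"}          # trusted manufacturer roots (simulated)
EXPECTED_MEASUREMENT = hashlib.sha256(b"data-connector-v1.0").hexdigest()

@dataclass
class AttestationReport:
    cert_chain: list[str]   # leaf ... root, simulated as plain identifiers
    measurement: str        # hash of the code the enclave is running
    cpu_genuine: bool       # result of the hardware check, simulated

def verify_report(report: AttestationReport) -> bool:
    # 1. Certificate chain must terminate in a trusted manufacturer root.
    chain_ok = bool(report.cert_chain) and report.cert_chain[-1] in TRUSTED_ROOTS
    # 2. The enclave must be running the untampered, expected code.
    code_ok = report.measurement == EXPECTED_MEASUREMENT
    return report.cpu_genuine and chain_ok and code_ok

onchain_metadata = {}   # stand-in for metadata recorded on-chain at registration

report = AttestationReport(
    cert_chain=["node-leaf-cert", "intel-sgx-root-ca"],
    measurement=EXPECTED_MEASUREMENT,
    cpu_genuine=True,
)
onchain_metadata["tee-node-7"] = {"verified": verify_report(report)}
print(onchain_metadata)
```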
TEE task validation ensures that tasks executed by a TEE node are properly verified and that the node is penalized if a task fails validation. The process follows these steps:
Proof Submission: The TEE node submits proof of its work to PIN Onchain Validators.
Validation: Validators check if the submitted proof is valid and the task was completed correctly.
Accountability: If the task fails validation, the TEE node is penalized by slashing its staked funds.
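A minimal sketch of this settle-and-slash flow is shown below, with an illustrative 10% penalty; the real slashing parameters and proof format are protocol-defined and the names here are hypothetical.

```python
from dataclasses import dataclass

SLASH_FRACTION = 0.1   # illustrative penalty; the real parameter is protocol-defined

@dataclass
class TeeNode:
    node_id: str
    stake: float

def validate_proof(proof: dict, expected_output_hash: str) -> bool:
    # Stand-in for validators checking that the task was completed correctly;
    # here we simply compare the committed output hash.
    return proof.get("output_hash") == expected_output_hash

def settle_task(node: TeeNode, proof: dict, expected_output_hash: str) -> None:
    if validate_proof(proof, expected_output_hash):
        print(f"{node.node_id}: proof accepted")
    else:
        penalty = node.stake * SLASH_FRACTION
        node.stake -= penalty
        print(f"{node.node_id}: proof rejected, slashed {penalty:.1f}, stake now {node.stake:.1f}")

node = TeeNode("tee-node-7", stake=500.0)
settle_task(node, {"output_hash": "abc123"}, expected_output_hash="abc123")
settle_task(node, {"output_hash": "bad999"}, expected_output_hash="abc123")
```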
The PIN protocol is the backbone of the open-source ecosystem built around PIN AI. It provides trust-minimized activity tracking and value exchange, access to valuable personal data, and an open innovation platform for new AI services. This protocol ensures the integrity and security of data interactions within the PIN AI network, fostering a robust and transparent ecosystem.
PIN’s core two-sided market connects users and their Personal AIs with External AIs, and service value grows as users share more contextual data. Its Proof-of-Engagement (PoE) protocol incentivizes participation via two components: rewards for sharing data and rewards for provable, high-value interactions. Together, these bootstrap engagement on both sides of the market.
End Users: They are incentivized to connect their personal data to the PIN network while maintaining ownership and privacy via data connectors. Their data provides the rich context needed for agent services to function effectively.
Data Connectors: Third-party operators run the Data Connector infrastructure that serves the PIN network. Connectors are secured by a stake-and-slash mechanism, and operators and stakers are rewarded for their contributions.
Agent Services: New agent services can be easily deployed via Agent Links. These agents leverage user contextual data to better serve user intents and provide valuable services. Agent service operators are crypto-economically secured and incentivized.
PIN AI launched its privacy-focused app on February 13, 2025, offering a customizable AI experience that runs directly on smartphones via open-source models like DeepSeek and Llama. Available on iOS and Android, it aggregates personal data from platforms like Google or financial services into a secure “data bank,” enabling personalized insights through features like the “GOD Rating” (measuring AI understanding) and “Ask PIN AI” for tasks such as travel planning. The app balances on-device processing with a private network to maintain efficiency and privacy.
PIN AI’s business model charges minimal fees for third-party AI access to user data (with explicit consent), similar to Ethereum’s gas fees. The rollout began with Android for early adopters (e.g., Discord members), followed by iOS, with an invite-only beta phase before full public release.
PIN AI has secured significant funding to support its mission of creating a decentralized, personalized AI platform. Strategic investments from prominent venture capital firms and angel investors have marked the fundraising journey. In September 2024, PIN AI raised $10 million in a pre-seed funding round. This round was led by Andreessen Horowitz (a16z), a well-known venture capital firm with a strong track record in supporting innovative tech startups. Other notable investors in this round included Hack VC, Foresight Ventures, and several angel investors, such as Illia Polosukhin and Scott Moore.
PIN AI addresses data fragmentation and privacy issues with a decentralized on-device personal intelligence system, secure edge computing, and Trusted Execution Environments (TEEs). This innovative approach empowers users to regain control over their data while benefiting from personalized AI interactions. The robust architecture, including the Personal AI Protocol, Intent Matching Protocol, and Agent Services Protocol, fosters a dynamic AI ecosystem that enhances data ownership and privacy.