Microsoft Copilot Feature Highlights Data Privacy Concerns

In July 2024, Microsoft released a new feature for Copilot, its generative AI chatbot, allowing it to access and process a company’s proprietary data stored on OneDrive. This gives Microsoft’s corporate clients a powerful tool for summarizing and analyzing their internal data, and it changes the calculus such customers use when weighing the advantages of AI in the cloud against the demands of data privacy.

GenAI tools have been able to analyze corporate data before now as well, but the process was too slow to be viable: even ingesting a single large document could take up to a full day, which put more extensive projects, such as learning from a company’s data center or from data in the cloud, out of reach.

However, now that there is a tool capable of analyzing such large quantities of enterprise data, companies will need to weigh the benefits of such a platform against the potential risks of exposing the data beyond their proprietary servers.

Copilot OneDrive vs. Data Privacy

When a company shares proprietary data from OneDrive with Copilot, the information is automatically incorporated into Copilot’s machine learning model, potentially exposing sensitive data to the AI. And although Microsoft’s Copilot offers commercial data protection, it is not clear what would happen at the end of a contract with Microsoft: would the proprietary data be deleted? If so, how much of it, and would any of it remain accessible outside the company?

“Of course, once Microsoft’s AI ‘knows’ the contents of your company’s internal documents, you’ll be less likely to ever sever your ongoing subscription,” states Mark Hachman, senior editor at PCWorld. Having already exposed their data to one vendor’s AI, companies will hesitate to terminate the subscription only to expose that same data to a competing model.

Microsoft maintains that the Copilot OneDrive feature uses a learning model exclusive to each company’s data, never sharing it with external AI applications. This means Copilot should be immune to leaks outside the company’s domain. Nonetheless, Microsoft admits that Copilot is not encrypted end-to-end, leaving it potentially vulnerable to attack, and even internally, data privacy could be compromised if content is exposed to employees without the proper access rights.

Fully Homomorphic Encryption for Private LLM

The most promising innovation on the horizon for securing a company’s privacy from artificial intelligence is Fully Homomorphic Encryption (FHE). FHE allows applications to perform computations on encrypted data without ever needing to decrypt it. With FHE, Copilot could perform its analysis exclusively on encrypted data, so that proprietary data is never actually exposed to the learning model.
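Production FHE schemes (CKKS, BFV, TFHE) are mathematically involved, but the core idea of computing directly on ciphertexts can be shown with a toy sketch. The Python example below uses the Paillier scheme, which is only additively homomorphic; it is a simplified stand-in for FHE with deliberately tiny demo keys, not anything Copilot or Chain Reaction actually uses.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustration only -- tiny demo primes, not a production scheme.
import random
from math import gcd

p, q = 1789, 2003                  # real keys use primes of 1024+ bits
n, n2 = p * q, (p * q) ** 2
phi = (p - 1) * (q - 1)            # the private key

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while gcd(r, n) != 1:          # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2   # generator g = n + 1

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n, then multiply by phi^-1 mod n
    return ((pow(c, phi, n2) - 1) // n * pow(phi, -1, n)) % n

# The server sees only ciphertexts, yet multiplying them
# adds the underlying plaintexts:
enc_sum = (encrypt(20_000) * encrypt(22_000)) % n2
print(decrypt(enc_sum))            # 42000, computed without decrypting the inputs
```

A fully homomorphic scheme extends this property from addition alone to both addition and multiplication, which is enough to evaluate arbitrary functions, including AI models, over encrypted data.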

Chain Reaction is at the forefront of the race to design and produce a processor that will enable real-time processing of Fully Homomorphic Encryption. The processor will enable AI to process data without compromising it, creating a Private LLM (Large Language Model). Corporations will finally be able to benefit from AI tools like Copilot without the fear that proprietary code and sensitive corporate information will be compromised.

Fortifying Privacy in the AI Era with Privacy Enhancing Technologies

The EU AI Act came into effect on August 1, 2024, marking an important step in mitigating the risks associated with AI deployments. This legislation focuses on creating a comprehensive regulatory framework to ensure the safe use of AI across various sectors. It aims to establish a risk-based approach to AI regulation, with strict requirements for high-risk AI systems, and it encourages innovation while safeguarding fundamental rights. The EU AI Act parallels efforts in the United States, where the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023, also addresses the challenges posed by AI technology.

Fully Homomorphic Encryption (FHE) Bridges the Gap

These initiatives are major steps toward ethical AI development, emphasizing the principles of safety, security, and trust. While these steps are laudable, they also spotlight an underlying truth: relying solely on governmental regulation may fall short in the face of rapid technological advancement. Achieving genuine privacy protection requires moving beyond legal frameworks and embracing the capabilities of cutting-edge Privacy Enhancing Technologies (PETs).

PETs encompass a range of strategies designed to fortify individual privacy in a connected world. From anonymization to data minimization, PETs work to curtail unnecessary data exposure and grant users greater control. Among these technologies, Fully Homomorphic Encryption shines as a beacon of innovation and protection.

Fully Homomorphic Encryption (or FHE) is a cryptographic breakthrough that permits computations on encrypted data without the need for decryption. In simple terms, it empowers data to remain encrypted while being processed, ensuring that sensitive information is never fully revealed. This transformative concept has the potential to revolutionize AI-powered landscapes by preserving data confidentiality during analysis.
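In notation: given an encryption function Enc, its decryption Dec, and an evaluation procedure Eval, an FHE scheme guarantees that for any supported function f,

Dec(Eval(f, Enc(x))) = f(x)

so the party doing the computing only ever handles Enc(x), never x itself.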

Key industry leaders are already at the forefront of embracing FHE. Tech giants like Microsoft, Google, Apple, IBM, and Amazon have implemented FHE tools and libraries, paving the way for broader adoption of this potent technology. Unlike traditional encryption methods, which mandate data decryption for analysis, FHE operates entirely within the encrypted domain. This leap forward ensures that privacy remains paramount, addressing the core privacy-utility trade-off.

Deploying AI without Compromising Privacy

Consider the implications in the healthcare sector. Medical researchers can use advanced AI to analyze encrypted patient data without exposing individual health records, achieving a delicate balance between data utility and privacy. The healthcare sector is just one of many that would benefit greatly from implementing AI tools along with Fully Homomorphic Encryption (FHE). In the insurance industry, AI can assess risk and personalize policies based on encrypted data. Retailers can analyze purchase data for trend prediction and personalized experiences, while in education, AI can tailor learning experiences while keeping student records secure.

Governmental emphasis on responsible innovation and legislation aligns perfectly with the integration of PETs. Privacy enhancing technologies like FHE can bridge the gap between regulations and technological advancement. This union ensures that innovation flourishes while individuals’ rights and safety remain uncompromised. The fusion of robust privacy solutions and regulatory initiatives is the driving force behind a digital ecosystem where privacy and progress coexist harmoniously.

In the end, the protection of privacy is not a mere aspiration but a steadfast commitment that demands both ethical principles and powerful technological tools. As the digital landscape evolves, the recognition that privacy preservation requires more than trust reaffirms the importance of privacy enhancing technologies. Beyond regulation and commitment lies the realm of actualization, where FHE becomes the linchpin of a privacy-centric AI era.

About the EU AI Act

The EU AI Act is the first comprehensive regulation on artificial intelligence by a major regulator. It categorizes AI applications into three risk levels. First, applications with unacceptable risks, like government-run social scoring similar to China’s, are banned. Second, high-risk applications, such as CV-scanning tools for job applicants, must meet specific legal requirements. Finally, applications not banned or deemed high-risk are mostly left unregulated.

About the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

The order outlines the administration’s AI policy goals and directs executive agencies to act accordingly. Goals include promoting AI industry competition and innovation, upholding civil and labor rights, protecting consumer privacy, setting federal AI procurement policies, developing watermarking for AI content, preventing intellectual property theft from generative models, and maintaining global AI leadership.

Before J.A.R.V.I.S. Goes Haywire: The Need for FHE in AI Agents

Anyone who has seen the Iron Man movies has probably thought how great it would be to have your own J.A.R.V.I.S., Tony Stark’s personal AI assistant. According to recent reports, many of today’s tech giants are working on very similar AI agents: personal assistants that organize your busy work schedule and handle the tedious activities that reduce productivity.

OpenAI, Microsoft, Google, and others are investing heavily in AI agents as the next generation of AI after chatbots. They are actively developing agent software designed to automate intricate tasks by assuming control over a user’s devices. Imagine never needing to manage payroll, write memos, return messages, or even book your own travel reservations. The AI agent would automatically manage your basic work assignments, leaving you time to focus on more important matters.

AI Agents and Your Data

While this sounds great, companies should tread carefully before allowing such AI agents into their workplaces. By granting an AI agent access to corporate devices, companies introduce significant security vulnerabilities to their proprietary data and that of their clients.

For example, employees could unwittingly expose sensitive information to the AI agent, or they could inadvertently open avenues for unauthorized access to data stored on the shared devices.

In addition, using AI agents for certain tasks, such as gathering public data or booking flight tickets, carries significant data privacy and security risks. Automated AI agents would be authorized to access and transmit personal and proprietary information, opening the door to unwanted disclosures that could cause reputational and financial damage.

In fact, AI agent software has an inherent security flaw at its core, namely that it revolves around a Large Language Model (LLM), the machine learning engine of the AI. Every piece of information the agent accesses and every interaction it conducts is necessarily absorbed into its LLM and could later be regurgitated by the AI agent to other users.

Fully Homomorphic Encryption Secures AI Agents

To address these security threats, a robust, proactive encryption protocol is needed to safeguard the sensitive data processed by AI agents. The most promising innovation in development to secure privacy from AI agents is Fully Homomorphic Encryption (FHE). FHE allows applications to perform computation on encrypted data without ever needing to decrypt it. The AI agent would be unable to store confidential information in its LLM because that private information would always remain encrypted thanks to FHE.

Chain Reaction is at the forefront of the race to design and produce a processor that will enable real-time processing of Fully Homomorphic Encryption. This cutting-edge technology will enable AI agents to serve as loyal aides and personal assistants, while preventing them from exposing proprietary or personal data. Corporate enterprises could then confidently take advantage of artificial intelligence to increase productivity and profits without fear that their code and employees’ sensitive information is being compromised.

How Privacy Enhancing Technologies Can Protect Us at the Airport

Privacy Enhancing Technologies (PETs) address the dire need to safeguard private and proprietary data in a variety of industries and platforms. One of the possible applications of PETs is in the field of travel, where they could be introduced in TSA checks at the airport.

In March 2024, the TSA rolled out an experimental self-service screening system at Harry Reid International Airport in Las Vegas. Passengers who use the self-screening stations are asked to voluntarily share their ID, including name and picture, as well as their height, travel information, and even an X-ray of the contents of their carry-on.

The goal of the self-screening is to expedite the screening process. However, uploading, sharing, and digitally storing these highly personal details raises major privacy concerns.

The TSA has faced privacy issues in the past as well. In 2023, a Swiss hacker leaked the TSA’s 2019 No-Fly List, containing the personal information of 1.5 million people barred from flying due to security concerns. The TSA regularly shares its No-Fly List with airlines around the world so they can screen passengers, and this leaked copy was discovered on an unsecured server belonging to the airline CommuteAir.

How can the TSA and airline industry keep individuals’ private information safe, while still maintaining the same vigilance about potential security threats?

Fully Homomorphic Encryption Protects Your Privacy

The most promising PET for securing passenger privacy is Fully Homomorphic Encryption (FHE). FHE allows applications to perform computation on encrypted data without ever decrypting it. By applying FHE to its databases, the TSA could retain the ability to analyze and share passenger data without revealing or compromising any of the information. The same would hold for any outside agency or company that needs to use the data, including airlines. Whether in self-screening stations or on airline servers, sensitive information would always be stored, transferred, and, thanks to FHE, even processed in its encrypted form.
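As a hedged illustration of what such a check could look like, the toy Python sketch below uses the additively homomorphic Paillier scheme as a simplified stand-in for true FHE (the key sizes, watchlist IDs, and helper names are all hypothetical). The TSA keeps the secret key, while an airline screens a passenger against the encrypted list without ever seeing it in the clear:

```python
# Hypothetical privacy-preserving No-Fly check on a toy Paillier scheme.
# Tiny demo keys -- for illustration only.
import random
from math import gcd

p, q = 1789, 2003
n, n2 = p * q, (p * q) ** 2
phi = (p - 1) * (q - 1)                        # TSA's secret key

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, phi, n2) - 1) // n * pow(phi, -1, n)) % n

# TSA side: encrypt the watchlist once; only ciphertexts leave the agency.
encrypted_watchlist = [encrypt(i) for i in (101, 202, 303)]

# Airline side: without the secret key, compute a blinded encrypted difference.
def blinded_check(enc_watch: int, passenger_id: int) -> int:
    diff = (enc_watch * encrypt(n - passenger_id)) % n2   # E(watch - passenger)
    return pow(diff, random.randrange(1, n), n2)          # E(r * (watch - passenger))

responses = [blinded_check(c, 202) for c in encrypted_watchlist]

# TSA side: a decrypted zero means a match; non-matches decrypt to random noise.
print(any(decrypt(r) == 0 for r in responses))            # True -- passenger flagged
```

Under this split, a breach of the airline’s server, like the CommuteAir incident, would expose only ciphertext.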

Chain Reaction is at the forefront of the race to design and produce a processor that will enable real-time processing of Fully Homomorphic Encryption. This cutting-edge technology will usher the TSA and similar governmental agencies into a new era of privacy-preserving data collaboration and security.

AI is Driving Growth in Cloud Usage, But Concerns About Privacy Persist

Artificial Intelligence (AI) can access large pools of data and draw logical inferences from billions of pages of information, enabling it to solve complex problems and be leveraged to enhance business outcomes.

As a result, cloud providers have seen double-digit growth in traffic since the release of ChatGPT, Gemini, Claude, and other GenAI tools. Amazon Web Services, for example, reported a 13% year-over-year increase, and Alphabet saw a 26% increase in its cloud unit. Similarly, Microsoft said its Azure cloud business grew 30% and credited 6 percentage points of that growth directly to increased demand for AI.

“The uptick in AI usage on the cloud is in large part thanks to enterprise customers testing use cases,” said Stefan Slowinski, global head of software research at investment bank BNP Paribas Exane. However, financial companies, healthcare institutions, and government agencies, among others, are still waiting to see whether crucial privacy concerns can be resolved before fully investing in AI.

One of those concerns is that every time a user interacts with an AI tool, the information from that interaction is automatically recorded by the machine learning system, exposing private data to the AI model and its hosting company. Currently, there is no way to delete this data or prevent the recording from happening. Slowinski points out that the risk for the hyperscalers who host AI models is that too few AI use cases make it past the pilot phase because sufficiently clear safety controls cannot be developed.

Fully Homomorphic Encryption Creates Private LLM

For AI to fulfill its potential, proprietary and sensitive information must be secured. The most promising innovation under development to secure private data is Fully Homomorphic Encryption (FHE). FHE allows AI applications to perform computation and analysis exclusively on encrypted data, so that the data is never exposed to the learning model.

As an emerging leader in Privacy Enhancing Technologies, Chain Reaction is at the forefront of the race to design and produce 3PU™, a revolutionary processor that will enable real-time processing of Fully Homomorphic Encryption. This technology will enable AI to process data without compromising privacy, creating a Private LLM (Large Language Model). This would finally enable corporate entities, public institutions, and hyperscalers to embrace the full use of AI, confident that their proprietary code and sensitive information remain secure and anonymous.

From Satoshi to Stability: Bitcoin’s Distinctive Edge

Emerging from a bear market, 2023 proved to be a remarkably successful year for Bitcoin, and 2024 has begun on an equally impressive note, with the cryptocurrency reaching the $50,000 mark for the first time in over two years. With Bitcoin gaining more public exposure, it’s important to explore what makes it stand out among other cryptocurrencies.

Approximately 22,932 cryptocurrencies have materialized since Bitcoin arrived on the scene in groundbreaking fashion in 2009. While cryptocurrencies vary in use, they all utilize a distributed ledger technology known as the blockchain: a shared public database in which new entries can be added but existing entries cannot be altered.

Types of cryptocurrencies include stablecoins, non-fungible tokens, central bank digital currencies, security assets, and crypto assets like Bitcoin. It’s easy to assume that all cryptocurrencies are more or less the same, but Bitcoin stands apart in the crowded cryptocurrency landscape, not merely as the pioneering digital currency but also due to its distinctive protocol, which cements its role as the ‘anchor’ of the industry – a symbol of stability and enduring presence.

Bitcoin distinguishes itself from other, more centralized cryptocurrencies by offering the largest available network, a predetermined supply, full sovereignty over one’s coins, and a transparent structure that cannot easily be influenced by any one organization.

In fact, Bitcoin is sometimes referred to as “digital gold”. People from all walks of life have always looked for assets not controlled by governments, and for centuries, gold has been the standard-bearer for such commodities. However, as governments have moved off the gold standard and the world has moved into the Internet age, Bitcoin has become the ultimate example of a decentralized financial asset.

Much like gold, Bitcoin has a low correlation to the stock market, and it has therefore become a viable option for those looking for hedging opportunities, fungibility, and many of the other characteristics of gold.

A Brief History Of “Digital Gold”

Bitcoin was invented 15 years ago by the mysterious Satoshi Nakamoto – a pseudonym for a person or persons unknown. Published on Oct. 31, 2008, a document known as the Bitcoin Whitepaper laid out plans for a computer technology that would enable multiple parties to send payments online without verification from financial institutions such as banks. The creators of Bitcoin utilized the principles of cryptography to develop a decentralized digital currency system that ensures secure and verifiable transactions, protecting them from unauthorized access or tampering by third parties.

A set of Bitcoin transactions from a certain period of time is known as a block. Blocks are chained together such that each block contains the cryptographic hash of its predecessor and therefore depends on it. This series of blocks, the blockchain, holds a complete, public, and permanent record of every Bitcoin transaction, meaning anyone can see where Bitcoin is flowing at any given time.
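A minimal Python sketch (with hypothetical field names) shows why the record cannot quietly be altered: because each block embeds the hash of the one before it, changing any historical entry breaks every link that follows.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash the block's full contents, including the previous block's hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    chain.append({
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,  # genesis
        "timestamp": time.time(),
        "transactions": transactions,
    })

chain: list = []
add_block(chain, ["alice -> bob: 1 BTC"])
add_block(chain, ["bob -> carol: 0.5 BTC"])

# Tampering with an earlier block changes its hash...
chain[0]["transactions"][0] = "alice -> mallory: 1 BTC"
# ...so the next block's stored prev_hash no longer matches, and any
# node re-checking the chain immediately detects the alteration.
assert block_hash(chain[0]) != chain[1]["prev_hash"]
```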

In the years since its introduction, Bitcoin has reigned as the world’s largest cryptocurrency in terms of market capitalization. Decentralized digital scarcity is the real innovation, and Bitcoin was the first to introduce it.

There will only ever be 21 million Bitcoins in existence, meaning that people who own Bitcoin will not have to worry about their coins losing value through the inflation caused by printing more of a currency. This has been a major issue for modern currencies, which can rapidly decrease in value over time.
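That hard cap is not an arbitrary decree but an arithmetic consequence of Bitcoin’s issuance schedule: the block reward started at 50 BTC and halves every 210,000 blocks until it rounds down to nothing. A quick Python sketch reproduces the figure:

```python
# Sum Bitcoin's block rewards: 50 BTC at launch, halving every 210,000 blocks.
# Working in satoshis (1 BTC = 100,000,000 satoshis) mirrors the integer
# arithmetic the protocol itself uses.
subsidy = 50 * 100_000_000       # initial block reward, in satoshis
total = 0
while subsidy > 0:
    total += 210_000 * subsidy   # satoshis minted during this halving era
    subsidy //= 2                # the reward halves, rounding down
print(total / 100_000_000)       # 20999999.9769 -- just under 21 million BTC
```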

Furthermore, as most of us learned in a first economics class, the supply of an asset plays a big role in determining its price. A scarce asset is likely to be more valuable, making Bitcoin’s limited supply an attractive quality.

Another attractive feature of Bitcoin is its pseudonymity. While Bitcoin transactions are recorded publicly on the blockchain, users’ identities are not: addresses serve as pseudonyms. This provides a degree of privacy for those who wish to keep their financial transactions discreet.

New developments like Ordinals are a great example of how Bitcoin technology continues to evolve. Ordinals are digital assets that can be inscribed on a satoshi, the smallest denomination of a bitcoin. In this sense, Ordinals are very similar to Non-Fungible Tokens: blockchain-based tokens that each represent a unique asset like a piece of art, media, or even a contract.

As more people and businesses adopt and use Bitcoin, its network effect grows stronger. The larger the network, the more valuable and widely accepted Bitcoin becomes as a form of currency.

As such, it seems that although other cryptocurrencies will come and go, Bitcoin’s popularity has remained strong for over a decade, and it is built to last.

Bitcoin Miners Create A Network With Integrity

Bitcoin’s network is key to helping maintain its viability and integrity.

Bitcoin runs on a peer-to-peer, decentralized network to verify transactions, keep records, and prevent fraud. This network relies on a process known as Bitcoin mining to confirm that all transactions are legitimate and to add new coins to the network, preventing people from spending bitcoins they don’t own or from spending the same coins twice.

Participants in the Bitcoin network, called miners, verify and collect transactions. Miners are responsible for ensuring that all transactions are valid and that no fraud is taking place within the network.

Before a miner can add a block of transactions to the blockchain, they must solve a computationally demanding mathematical puzzle. This scheme is known as “proof of work” and typically requires powerful, single-purpose computers. Miners compete with each other, and the first to solve the puzzle earns the right to propose the block and collect the reward.
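In Bitcoin the puzzle is to find a nonce that makes the block header’s double SHA-256 hash fall below a network-set target. The hypothetical Python sketch below approximates this with a simpler leading-zeros rule:

```python
import hashlib

def mine(header: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty`
    zero hex digits. (Bitcoin uses double SHA-256 against a numeric
    target, and its real difficulty is astronomically higher.)"""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("prev_hash|merkle_root|timestamp", difficulty=4)
print("winning nonce:", nonce)   # found after ~16^4 = 65,536 hashes on average
```

The asymmetry is the point: finding the nonce takes enormous work, but any peer can verify it with a single hash, which is what lets fellow miners validate a proposed block almost instantly.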

Further verification by fellow miners results in a new block being added to the blockchain. The successful miner is rewarded with newly created Bitcoins and transaction fees from the added block.

The fact that Bitcoin miners are rewarded in Bitcoin is a critical component that helps maintain the network’s integrity. If miners were to try to undermine the network, the value of Bitcoin – and with it the worth of their own rewards – would likely fall.

This reward system, which helps align miners’ incentives with those of the network, is one of many characteristics that make Bitcoin’s model so unique. After laying the groundwork for other cryptocurrencies, Bitcoin has the longevity and track record that only come with such widespread adoption.

These are among the many reasons why Bitcoin has been called digital gold. With a finite supply, Bitcoin is the rarer of the two assets, benefiting those who have invested in the 19 million bitcoins that have been mined so far and are currently in circulation.