# LWE/GLWE Encoding

So far, we’ve been treating a ciphertext as “it encrypts a message.” In practice, **how you *place* (encode) the message inside the plaintext space matters a lot**, because it directly affects what homomorphic operations do and whether results stay correct. This “encoding layer” sits on top of encryption and is a core design choice in TFHE.

This section explains the most common encodings you’ll see around GLWE/LWE in TFHE.

***

## 1) What is an encoding?

An **encoding** is simply:

> the rule you choose to represent a clear message (integer/bit/real) as a *plaintext value* inside $$\mathbb{Z}_q$$ (or $$\mathcal{R}_q$$ for GLWE), before encryption.

It matters because:

* noise lives somewhere (in TFHE, noise is typically added in the **LSB**),
* so you place the message where noise won’t immediately destroy it,
* and you may want extra “headroom” so operations don’t overflow.

***

## 2) Encoding integers in the MSB

### The core idea

In TFHE, **noise is added in the LSB (least significant bits)**, so the message is placed in the **MSB (most significant bits)** to keep it far from noise.

We pick plaintext modulus $$p \le q$$ and define:

$$\Delta = \frac{q}{p}$$

To encode an integer message $$m \in \mathbb{Z}_p$$, we place it as:

$$\text{plaintext} = \Delta \cdot m \in \mathbb{Z}\_q$$

Then encryption adds a small noise $$e$$ in the LSB, so inside the ciphertext you effectively have:

$$\Delta m + e$$

<figure><img src="/files/KeL2AEH1eh2zdn8m0yKH" alt=""><figcaption><p>image credit: <a href="https://www.zama.org/post/tfhe-deep-dive-part-2">zama</a></p></figcaption></figure>

### Why this is useful

With MSB encoding, **leveled operations** like:

* homomorphic addition
* multiplication by a constant

work naturally **modulo** $$p$$, and decoding is done by rounding/dividing by $$\Delta$$.
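As a concrete illustration, here is a minimal Python sketch of this encode/decode arithmetic. The parameters and helper names (`q`, `p`, `encode`, `decode`) are toy choices for illustration only, not a real TFHE API, and the mask/secret-key part of encryption is omitted:

```python
# Toy sketch of MSB encoding (illustrative, insecure parameters).
q = 2**32          # ciphertext modulus
p = 2**4           # plaintext modulus
DELTA = q // p     # scaling factor Delta = q / p

def encode(m: int) -> int:
    """Place the message m (mod p) in the MSB: plaintext = Delta * m."""
    return (DELTA * (m % p)) % q

def decode(pt: int) -> int:
    """Recover m by rounding to the nearest multiple of Delta."""
    return ((pt + DELTA // 2) // DELTA) % p

# Noise in the LSB does not disturb the message as long as |e| < Delta/2.
e = 12345                       # small noise, well below Delta/2 = 2**27
assert decode(encode(7) + e) == 7

# Leveled operations act modulo p on the underlying messages:
a, b = encode(9), encode(12)
assert decode((a + b) % q) == (9 + 12) % p   # homomorphic-style addition
assert decode((3 * a) % q) == (3 * 9) % p    # multiplication by a constant
```

The rounding in `decode` is exactly why the message must sit far above the noise: decoding snaps to the nearest $$\Delta$$-bucket, and that only works while the accumulated noise stays below $$\Delta/2$$.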

<figure><img src="/files/j0x936imrKqLG8tXCIhP" alt=""><figcaption><p>image credit: <a href="https://www.zama.org/post/tfhe-deep-dive-part-2">zama</a></p></figcaption></figure>

***

## 3) Encoding integers in the MSB with padding bits

### Why padding exists

Homomorphic operations can be “helped” by choosing an encoding that leaves space for growth. TFHE commonly uses **padding bits** (extra zeros) in the MSB region to provide safety margin (“headroom”) for leveled operations.

### How it’s defined

We still define $$\Delta = q/p$$, but instead of messages in $$\mathbb{Z}_p$$, we encode messages in a smaller space:

$$m \in \mathbb{Z}_{p'} \quad \text{with } p' < p$$

The gap between $$p'$$ and $$p$$ is the **padding space**.

<figure><img src="/files/4G2nPkaTYxceLO15KS3Q" alt=""><figcaption><p>image credit: <a href="https://www.zama.org/post/tfhe-deep-dive-part-2">zama</a></p></figcaption></figure>

### What padding buys you (the “consumption” idea)

Padding is especially practical for **exact** leveled operations.

* suppose you have **2 padding bits**
* if you add two ciphertexts, you can get the **exact** integer sum (not just mod $$p$$),
* and you may “consume” one padding bit in the process (you now have 1 padding bit left).

So padding behaves like a budget:

* more operations → less remaining padding → higher risk of overflow/wraparound.
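The “budget” behavior above can be sketched numerically. This is a toy illustration with made-up parameters (`p_prime`, the helper names, and the bit split are assumptions, not a library API):

```python
# Toy sketch of padding "consumption" (illustrative, insecure parameters).
q = 2**32
p = 2**4             # 4 bits of plaintext space in total
DELTA = q // p
p_prime = 2**2       # messages live in Z_{p'}: 2 message bits + 2 padding bits

def encode(m: int) -> int:
    return (DELTA * m) % q

def decode(pt: int) -> int:
    return ((pt + DELTA // 2) // DELTA) % p

# With 2 padding bits, adding two ciphertexts of messages < p' yields the
# EXACT integer sum (no wraparound), at the cost of one padding bit:
m1, m2 = 3, 2                       # both < p' = 4
exact_sum = decode((encode(m1) + encode(m2)) % q)
assert exact_sum == m1 + m2 == 5    # 5 >= p', yet still recovered exactly

# The result now needs 3 bits, so only 1 padding bit remains: one more
# addition of similar magnitude risks overflowing the plaintext space.
```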

<figure><img src="/files/46PSOPFqUReIJOneymnY" alt=""><figcaption><p>image credit: <a href="https://www.zama.org/post/tfhe-deep-dive-part-2">zama</a></p></figcaption></figure>

***

## 4) Encoding of bits in “GB mode”

This is a famous special case of **MSB + padding** used in **Gate Bootstrapping** workflows.

### What gets encoded?

Bits: $$m \in \{0,1\}$$

### What scale is used?

The encoding uses:

$$\Delta = \frac{q}{4}$$

which corresponds to **1 bit of padding**.

### Why that specific choice?

That single padding bit is used to perform an **exact linear combination** of input ciphertexts and constants *before* bootstrapping produces the final gate output. (You’ll revisit this when you study bootstrapping.)

***

## 5) Encoding of reals (approximate encoding)

This encoding is different from the “$$\Delta \cdot m$$ + noise” story.

### Core idea

For reals (restricted to a fixed interval), the picture changes:

* the message and the error become “one thing”,
* the message occupies the whole of $$\mathbb{Z}_q$$,
* the **LSBs are perturbed by noise** that represents the approximation error,
* and there is **no** $$\Delta$$ to cleanly separate message and error.

So instead of “exact recovery by rounding to $$\Delta$$-buckets”, you get:

* **approximate** recovery with a certain precision

<figure><img src="/files/uAQt8uYj9guPjKiSl0lA" alt=""><figcaption><p>image credit: <a href="https://www.zama.org/post/tfhe-deep-dive-part-2">zama</a></p></figcaption></figure>

<figure><img src="/files/vRO1vz37iMMXKDxHWqXu" alt=""><figcaption><p>image credit: <a href="https://www.zama.org/post/tfhe-deep-dive-part-2">zama</a></p></figcaption></figure>

### Why it’s useful

This encoding is practical for evaluating **approximate leveled operations** (additions and multiplications by constants) up to a desired precision.

### Decryption changes

The first step stays the same (remove the mask/secret contribution), but the second step becomes either:

* rounding, or
* adding a fresh random error in the LSB.

***

## 6) Torus visualization (the “T” in TFHE)

The “T” in **TFHE** stands for **Torus**, a donut-shaped mathematical structure that provides an alternative way to visualize encodings.

* “bit layout” visualization: message in the MSB, noise in the LSB
* “torus” visualization: the same values as points on a circle-like space, which makes the wrapping/rounding intuition easier
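A tiny sketch of the torus view, mapping $$\mathbb{Z}_q$$ onto the real interval $$[0, 1)$$ (angles on the circle); the helper name is an illustrative assumption:

```python
# Toy sketch: viewing Z_q on the torus T = R/Z (reals modulo 1).
q = 2**32
DELTA = q // 2**4    # same MSB encoding scale as before (p = 16)

def to_torus(pt: int) -> float:
    """Map an element of Z_q to a point on [0, 1), i.e. an angle on the circle."""
    return (pt % q) / q

# MSB-encoded messages land on p evenly spaced points of the circle;
# noise becomes a small angular perturbation around each point.
assert to_torus(DELTA * 3) == 3 / 16

# Wraparound mod q is just going once around the circle: q-1 sits
# immediately "before" 0 on the torus.
assert to_torus(q - 1) < 1.0
```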

<figure><img src="/files/1dsujLicxfXg2c6rZSHw" alt=""><figcaption><p>image credit: <a href="https://fhe.org/meetups/003-tfhe-deep-dive">FHE.org</a></p></figcaption></figure>

***

## 7) Encodings in GLWE (how everything generalizes)

Up to now, the encoding examples were shown for **LWE** for simplicity. Generalizing them to GLWE is straightforward:

* LWE encrypts a single value
* GLWE encrypts a polynomial $$M(X)$$ with $$N$$ coefficients (mod $$X^N+1$$)

To encode in **GLWE**, you simply apply the same encoding rule **to each coefficient** of the message polynomial.

So:

* “integer in MSB” for GLWE = each coefficient is MSB-encoded
* “MSB with padding” for GLWE = each coefficient has padding headroom
* “real encoding” for GLWE = each coefficient carries approximate information
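A minimal coefficient-wise sketch (toy, insecure parameters; `encode_poly`/`decode_poly` are illustrative names, not a real GLWE library API):

```python
# Toy sketch: GLWE encoding = the LWE encoding applied coefficient-wise
# to a message polynomial M(X) mod X^N + 1 (illustrative parameters).
q = 2**32
p = 2**4
DELTA = q // p
N = 8                                    # polynomial size (toy value)

def encode_poly(msgs):
    """MSB-encode each of the N coefficients (each in Z_p)."""
    return [(DELTA * (m % p)) % q for m in msgs]

def decode_poly(pts):
    """Round each coefficient back to the nearest multiple of Delta."""
    return [((c + DELTA // 2) // DELTA) % p for c in pts]

M = [1, 0, 7, 3, 15, 2, 0, 9]            # coefficients of M(X)
noisy = [(c + 99) % q for c in encode_poly(M)]   # small LSB noise per coefficient
assert decode_poly(noisy) == M
```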

<figure><img src="/files/T8fAhAxfaDKEcGqHvsdG" alt=""><figcaption><p>image credit: <a href="https://www.zama.org/post/tfhe-deep-dive-part-2">zama</a></p></figcaption></figure>

***

## Quick mental checklist

When you choose an encoding, always ask:

1. Is my message **exact** (integers/bits) or **approximate** (reals)?
2. Do I need results **mod p**, or do I need **exact arithmetic** for a while (padding)?
3. How many operations will I do before refreshing (noise/padding budget)?

