# Interest rates model

How the borrow and the deposit interest rates are computed


Borrow Interest Rate

The borrow interest rate $i_{b_t}$ is computed algorithmically for each pool, using the parameters defined in the Liquidity pools dynamics chapter: $U$, $U_{opt}$, $R_0$, $R_1$, $R_2$.

If $U_t$ < $U_{opt}$

$i_{b_t}=R_0+\frac{U_t}{U_{opt}} * R_1$

If $U_t$ ≥ $U_{opt}$

$i_{b_t}=R_0+R_1+\frac{U_t-U_{opt}}{1-U_{opt}}* R_2$
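The piecewise formula above can be sketched as a small function. This is illustrative only: rates and utilisation are expressed as decimal fractions, and the parameter values used below are hypothetical, not the protocol's actual settings.

```python
def borrow_rate(u, u_opt, r0, r1, r2):
    """Piecewise-linear borrow interest rate i_b as a function of
    pool utilisation u (all arguments are decimal fractions)."""
    if u < u_opt:
        # Below optimal utilisation: rate grows gently along slope R1.
        return r0 + (u / u_opt) * r1
    # At or above optimal utilisation: steeper slope R2 discourages borrowing.
    return r0 + r1 + ((u - u_opt) / (1 - u_opt)) * r2

# Hypothetical pool parameters: R0 = 2%, R1 = 10%, R2 = 50%, U_opt = 80%.
print(borrow_rate(0.40, 0.80, 0.02, 0.10, 0.50))  # below U_opt -> 0.07
print(borrow_rate(0.90, 0.80, 0.02, 0.10, 0.50))  # above U_opt -> 0.37
```

Note the kink at $U_{opt}$: past that point the much steeper slope $R_2$ pushes the rate up quickly, nudging utilisation back toward its optimum.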

Deposit Interest Rate

The deposit interest rate represents the distribution of the interest paid by borrowers, net of a fee retained to feed the Folks Finance community treasury. It is strongly influenced by the pool's utilisation $U_t$ and the borrow interest rate $i_{b_t}$:

$i_{d_t}=U_t * i_{b_t} * (1-RR)$
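As a sketch, the deposit rate is a one-line function of the values above; the inputs below (90% utilisation, 37% borrow rate, 10% retention rate $RR$) are illustrative assumptions, not protocol parameters.

```python
def deposit_rate(u, i_b, rr):
    """Deposit interest rate: the borrowers' interest, scaled by pool
    utilisation, net of the retention rate RR kept for the treasury."""
    return u * i_b * (1 - rr)

# Hypothetical values: U_t = 0.90, i_b = 0.37, RR = 0.10.
print(deposit_rate(0.90, 0.37, 0.10))  # -> 0.2997
```

Intuitively, only the utilised fraction $U_t$ of deposits is earning borrower interest, and the treasury keeps the share $RR$ of it, so depositors receive the remainder.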

ALGO Interest Rate

Because holding ALGO in a wallet earns Algorand participation rewards, the borrow and deposit interest rates of the ALGO/fALGO pool are adjusted to take these rewards into account.

For the same reason, the deposit interest rate is increased by the share of rewards that the protocol pays to users who deposit ALGOs in the liquidity pool.

The *retention rate* $RR$ represents the percentage of the interest paid by borrowers that the protocol keeps as revenue. These protocol revenues are sent to the community treasury.

The ALGO borrow interest rate is computed as follows:

If $U_t$ < $U_{opt}$

$i^{Algo}_{b_t}=rewards + R_0+\frac{U_t}{U_{opt}} * R_1$

If $U_t$ ≥ $U_{opt}$

$i^{Algo}_{b_t}=rewards + R_0+R_1+\frac{U_t-U_{opt}}{1-U_{opt}}* R_2$

The $rewards$ factor corresponds to the percentage of rewards provided by the Algorand protocol, which would have been paid directly to the borrower's wallet had they left the ALGOs in their wallet (participating in the reward program). The borrow interest rate is therefore increased by exactly the rewards percentage.

$i^{Algo}_{d_t}=rewards + U_t * (i_{b_t}- rewards) * (1-RR)$
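The two ALGO-adjusted formulas can be sketched together. One assumption is made explicit here: the $i_{b_t}$ inside the deposit formula is read as the ALGO borrow rate (which includes rewards), so subtracting $rewards$ recovers the base rate before distribution. All parameter values are illustrative.

```python
def algo_borrow_rate(rewards, u, u_opt, r0, r1, r2):
    """ALGO borrow rate: the standard piecewise rate plus the
    participation rewards percentage the borrower forgoes."""
    if u < u_opt:
        base = r0 + (u / u_opt) * r1
    else:
        base = r0 + r1 + ((u - u_opt) / (1 - u_opt)) * r2
    return rewards + base

def algo_deposit_rate(rewards, u, i_b_algo, rr):
    """ALGO deposit rate: participation rewards plus the distributed
    interest, computed on the borrow rate net of rewards."""
    return rewards + u * (i_b_algo - rewards) * (1 - rr)

# Hypothetical parameters: 5% participation rewards, U_t = 0.90,
# U_opt = 0.80, R0 = 0.02, R1 = 0.10, R2 = 0.50, RR = 0.10.
rewards = 0.05
i_b = algo_borrow_rate(rewards, 0.90, 0.80, 0.02, 0.10, 0.50)
print(i_b)                                          # -> 0.42
print(algo_deposit_rate(rewards, 0.90, i_b, 0.10))  # -> 0.3497
```

Both sides receive the rewards term: borrowers pay it because their ALGOs would have earned it idle in a wallet, and depositors receive it because the protocol passes it through on their behalf.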