Our team is developing Databricks notebooks using Spark and Scala, and we are inserting data into collections in Cosmos DB.

Following multiple guides, we set the following configuration fields to limit RU consumption in Cosmos:

"spark.cosmos.throughputControl.globalControl.container" = collection for throughput
"spark.cosmos.throughputControl.targetThroughputThreshold" = 0.2

However, after multiple executions it looks like the library is not honoring the limit we set: the limitation appears to be ignored, and RU usage can grow up to 100%. We are opening this issue at Microsoft's suggestion.

Thanks for your feedback.
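For reference, a minimal sketch of how we would expect the throughput-control options to be wired into a write, assuming the Spark 3 OLTP connector (azure-cosmos-spark) and placeholder account, database, and collection names. Note that in the connector's documented configuration, throughput control also has to be explicitly switched on via spark.cosmos.throughputControl.enabled; setting only the container and threshold keys may not be enough on its own:

```scala
// Hypothetical names: endpoint, key, database, and collection values below
// are placeholders, not our real resources.
val cosmosConfig = Map(
  "spark.cosmos.accountEndpoint" -> "https://<account>.documents.azure.com:443/",
  "spark.cosmos.accountKey"      -> "<account-key>",
  "spark.cosmos.database"        -> "MyDatabase",
  "spark.cosmos.container"       -> "MyCollection",

  // Throughput control: cap this client's consumption at ~20% of the
  // provisioned RUs on the target container.
  "spark.cosmos.throughputControl.enabled"                   -> "true",
  "spark.cosmos.throughputControl.name"                      -> "writeThroughputGroup", // hypothetical group name
  "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.2",

  // Global throughput control stores its shared state in a dedicated
  // metadata database/container (placeholder names here).
  "spark.cosmos.throughputControl.globalControl.database"  -> "ThroughputControlDB",
  "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
)

df.write
  .format("cosmos.oltp")
  .options(cosmosConfig)
  .mode("Append")
  .save()
```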