I'm trying to do a bulk import in a single Spark job using the Cosmos DB bulk import configuration:
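For reference, here is a minimal sketch of the write path I'm using, assuming the azure-cosmosdb-spark connector (the endpoint, key, database, and collection values are placeholders, and the batch size is just an illustrative value):

```scala
import com.microsoft.azure.cosmosdb.spark.config.Config
import com.microsoft.azure.cosmosdb.spark.schema._
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("bulk-import").getOrCreate()
import spark.implicits._

val writeConfig = Config(Map(
  "Endpoint"         -> "https://<account>.documents.azure.com:443/",
  "Masterkey"        -> "<master-key>",
  "Database"         -> "<database>",
  "Collection"       -> "<collection>",
  "BulkImport"       -> "true",   // route writes through the bulk import API
  "WritingBatchSize" -> "1000"    // documents per bulk import batch
))

// Stand-in DataFrame for the documents being imported
val df = Seq(("doc1", "bu1"), ("doc2", "bu2")).toDF("id", "businessUnitId")
df.write.mode(SaveMode.Overwrite).cosmosDB(writeConfig)
```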
When retrieving/writing one of my documents whose size is larger than usual (2.9 MB), I get the following exception (my collection has a partition key defined):
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 4 times, most recent failure: Lost task 0.3 in stage 8.0 (TID 246, 10.10.42.5, executor 1): java.lang.Exception: Errors encountered in bulk import API execution. PartitionKeyDefinition: {"paths":["/key/businessUnit/id"],"kind":"Hash"}, Number of failures corresponding to exception of type: com.microsoft.azure.documentdb.DocumentClientException = 1. The failed import docs are:
Thanks in advance for the help.