Model chunking support #404

Merged · 8 commits · Sep 28, 2023
Changes from 5 commits
3 changes: 3 additions & 0 deletions doc/changelog.rst
@@ -8,14 +8,17 @@ To be released at some future point in time

Description

- Added support for model chunking
- Updated the third-party RedisAI component
- Updated the third-party lcov component

Detailed Notes

- Models will now be automatically chunked when sent to/received from the backend database. This allows use of models greater than 511MB in size. (PR404_)
- Updated RedisAI from v1.2.3 (test target) and v1.2.4/v1.2.5 (CI/CD pipeline) to v1.2.7 (PR402_)
- Updated lcov from version 1.15 to 2.0 (PR396_)

.. _PR404: https://github.com/CrayLabs/SmartRedis/pull/404
.. _PR402: https://github.com/CrayLabs/SmartRedis/pull/402
.. _PR396: https://github.com/CrayLabs/SmartRedis/pull/396
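
For context, the chunking described in the first detailed note amounts to slicing the serialized model into fixed-size windows. A minimal sketch, assuming the model bytes sit in one contiguous buffer; the helper name chunk_model is hypothetical and not part of this PR:

#include <algorithm>
#include <string>
#include <string_view>
#include <vector>

// Hypothetical helper: split a serialized model into string_view chunks
// of at most chunk_size bytes. Each element views the original buffer,
// so no bytes are copied.
std::vector<std::string_view> chunk_model(const std::string& model,
                                          size_t chunk_size)
{
    std::vector<std::string_view> chunks;
    for (size_t offset = 0; offset < model.size(); offset += chunk_size) {
        size_t len = std::min(chunk_size, model.size() - offset);
        chunks.emplace_back(model.data() + offset, len);
    }
    return chunks;
}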

13 changes: 13 additions & 0 deletions include/client.h
@@ -1269,6 +1269,19 @@ class Client : public SRObject
const int start_index,
const int end_index);

/*!
* \brief Reconfigure the chunking size that Redis uses for model
* serialization, replication, and the model_get command.
* \details This method issues the AI.CONFIG command to the Redis
* database to change the model chunking size. The default
* size of 511MB should be sufficient for most applications,
* so it is expected to be very rare that a client calls
* this method.
* \param chunk_size The new chunk size in bytes
* \throw SmartRedis::Exception if the command fails.
*/
void set_model_chunk_size(int chunk_size);
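
For orientation, a minimal usage sketch. The cluster-flag constructor shown is an assumption about the client API of this era and may differ by SmartRedis version:

#include "client.h"

int main()
{
    SmartRedis::Client client(false);  // false => standalone Redis
    // Shrink chunks to 16MB, e.g. for a server with a lowered
    // proto-max-bulk-len; 16MB is an arbitrary illustrative value.
    client.set_model_chunk_size(16 * 1024 * 1024);
    return 0;
}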

/*!
* \brief Create a string representation of the client
* \returns A string containing client details
15 changes: 15 additions & 0 deletions include/command.h
@@ -147,6 +147,21 @@ class Command
return *this;
}

/*!
* \brief Add a vector of string_views to the command.
* \details The string values are added to the command
* by pointer via add_field_ptr(), so the
* underlying buffers must remain valid for the
* lifetime of the command. To add a vector of
* keys, use the add_keys() method.
* \param fields The strings to add to the command
* \returns The command object, for chaining.
*/
virtual Command& operator<<(const std::vector<std::string_view>& fields) {
for (size_t i = 0; i < fields.size(); i++) {
add_field_ptr(fields[i]);
}
return *this;
}
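
A sketch of how this overload composes with the others when assembling a model-set command. SingleKeyCommand, the string-literal operator<< overload, and the AI.MODELSTORE argument order are all assumptions for illustration, not necessarily what this PR does:

// Build a model-set command from pre-chunked buffers (chunk_model is
// the hypothetical helper sketched earlier on this page).
std::string model(100 * 1024 * 1024, '\0');  // stand-in for model bytes
std::vector<std::string_view> chunks = chunk_model(model, 16 * 1024 * 1024);
SingleKeyCommand cmd;
cmd << "AI.MODELSTORE" << std::string("my_model")
    << "TORCH" << "CPU" << "BLOB";
cmd << chunks;  // each chunk becomes one argument, added by pointer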

/*!
* \brief Add a vector of strings to the command.
* \details The string values are copied to the command.
8 changes: 8 additions & 0 deletions include/commandreply.h
@@ -267,6 +267,14 @@ class CommandReply {
*/
std::string redis_reply_type();

/*!
* \brief Determine whether the response is an array
* \returns true iff the response is of type REDIS_REPLY_ARRAY
*/
bool is_array() {
return _reply->type == REDIS_REPLY_ARRAY;
}
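
For illustration, a hedged sketch of how is_array() might guard reassembly of a chunked model fetch. The accessors n_elements(), operator[], str(), and str_len() are assumed here:

// Concatenate the chunks of an array reply back into one model buffer.
std::string model;
if (reply.is_array()) {
    for (size_t i = 0; i < reply.n_elements(); i++)
        model.append(reply[i].str(), reply[i].str_len());
}
else {
    model.append(reply.str(), reply.str_len());  // single-chunk reply
}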

/*!
* \brief Print the reply structure of the CommandReply
*/
13 changes: 13 additions & 0 deletions include/pyclient.h
@@ -925,6 +925,19 @@ class PyClient : public PySRObject
const int start_index,
const int end_index);

/*!
* \brief Reconfigure the chunking size that Redis uses for model
* serialization, replication, and the model_get command.
* \details This method issues the AI.CONFIG command to the Redis
* database to change the model chunking size. The default
* size of 511MB should be sufficient for most applications,
* so it is expected to be very rare that a client calls
* this method.
* \param chunk_size The new chunk size in bytes
* \throw SmartRedis::Exception if the command fails.
*/
void set_model_chunk_size(int chunk_size);

/*!
* \brief Create a string representation of the Client
* \returns A string representation of the Client
27 changes: 23 additions & 4 deletions include/redis.h
@@ -276,7 +276,7 @@ class Redis : public RedisServer
* \brief Set a model from std::string_view buffer in the
* database for future execution
* \param key The key to associate with the model
* \param model The model as a continuous buffer string_view
* \param model The model as a sequence of buffer string_view chunks
* \param backend The name of the backend
* (TF, TFLITE, TORCH, ONNX)
* \param device The name of the device for execution
@@ -292,7 +292,7 @@
* \throw RuntimeException for all client errors
*/
virtual CommandReply set_model(const std::string& key,
std::string_view model,
const std::vector<std::string_view>& model,
const std::string& backend,
const std::string& device,
int batch_size = 0,
@@ -307,7 +307,7 @@
* \brief Set a model from std::string_view buffer in the
* database for future execution in a multi-GPU system
* \param name The name to associate with the model
* \param model The model as a continuous buffer string_view
* \param model The model as a sequence of buffer string_view chunks
* \param backend The name of the backend
* (TF, TFLITE, TORCH, ONNX)
* \param first_gpu The first GPU to use with this model
@@ -322,7 +322,7 @@
* \throw RuntimeException for all client errors
*/
virtual void set_model_multigpu(const std::string& name,
const std::string_view& model,
const std::vector<std::string_view>& model,
const std::string& backend,
int first_gpu,
int num_gpus,
@@ -505,6 +505,25 @@
const std::string& key,
const bool reset_stat);

/*!
* \brief Retrieve the current model chunk size
* \returns The size in bytes for model chunking
*/
virtual int get_model_chunk_size();

/*!
* \brief Reconfigure the chunking size that Redis uses for model
* serialization, replication, and the model_get command.
* \details This method issues the AI.CONFIG command to the Redis
* database to change the model chunking size. The default
* size of 511MB should be sufficient for most applications,
* so it is expected to be very rare that a client calls
* this method.
* \param chunk_size The new chunk size in bytes
* \throw SmartRedis::Exception if the command fails.
*/
virtual void set_model_chunk_size(int chunk_size);
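
A minimal sketch of what an implementation could look like. The RedisAI syntax "AI.CONFIG MODEL_CHUNK_SIZE <bytes>", the AddressAnyCommand builder, and the run()/has_error() calls are assumptions here, not necessarily the PR's exact code:

void Redis::set_model_chunk_size(int chunk_size)
{
    AddressAnyCommand cmd;
    cmd << "AI.CONFIG" << "MODEL_CHUNK_SIZE" << std::to_string(chunk_size);
    CommandReply reply = run(cmd);
    if (reply.has_error())
        throw SRRuntimeException("AI.CONFIG MODEL_CHUNK_SIZE failed");
    store_model_chunk_size(chunk_size);  // keep the cached value in sync
}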

/*!
* \brief Run a CommandList via a Pipeline
* \param cmdlist The list of commands to run
26 changes: 22 additions & 4 deletions include/rediscluster.h
@@ -294,7 +294,7 @@ class RedisCluster : public RedisServer
* \brief Set a model from std::string_view buffer in the
* database for future execution
* \param key The key to associate with the model
* \param model The model as a continuous buffer string_view
* \param model The model as a sequence of buffer string_view chunks
* \param backend The name of the backend
* (TF, TFLITE, TORCH, ONNX)
* \param device The name of the device for execution
@@ -312,7 +312,7 @@
* \throw RuntimeException for all client errors
*/
virtual CommandReply set_model(const std::string& key,
std::string_view model,
const std::vector<std::string_view>& model,
const std::string& backend,
const std::string& device,
int batch_size = 0,
@@ -327,7 +327,7 @@
* \brief Set a model from std::string_view buffer in the
* database for future execution in a multi-GPU system
* \param name The name to associate with the model
* \param model The model as a continuous buffer string_view
* \param model The model as a sequence of buffer string_view chunks
* \param backend The name of the backend
* (TF, TFLITE, TORCH, ONNX)
* \param first_gpu The first GPU to use with this model
@@ -344,7 +344,7 @@
* \throw RuntimeException for all client errors
*/
virtual void set_model_multigpu(const std::string& name,
const std::string_view& model,
const std::vector<std::string_view>& model,
const std::string& backend,
int first_gpu,
int num_gpus,
@@ -527,6 +527,11 @@
get_model_script_ai_info(const std::string& address,
const std::string& key,
const bool reset_stat);
/*!
* \brief Retrieve the current model chunk size
* \returns The size in bytes for model chunking
*/
virtual int get_model_chunk_size();

/*!
* \brief Run a CommandList via a Pipeline.
@@ -741,6 +746,19 @@
std::vector<std::string>& inputs,
std::vector<std::string>& outputs);

/*!
* \brief Reconfigure the chunking size that Redis uses for model
* serialization, replication, and the model_get command.
* \details This method issues the AI.CONFIG command to the Redis
* database to change the model chunking size. The default
* size of 511MB should be sufficient for most applications,
* so it is expected to be very rare that a client calls
* this method.
* \param chunk_size The new chunk size in bytes
* \throw SmartRedis::Exception if the command fails.
*/
virtual void set_model_chunk_size(int chunk_size);

/*!
* \brief Execute a pipeline for the provided commands.
* The provided commands MUST be executable on a single
46 changes: 42 additions & 4 deletions include/redisserver.h
@@ -277,7 +277,7 @@ class RedisServer {
* \brief Set a model from std::string_view buffer in the
* database for future execution
* \param key The key to associate with the model
* \param model The model as a continuous buffer string_view
* \param model The model as a sequence of buffer string_view chunks
* \param backend The name of the backend
* (TF, TFLITE, TORCH, ONNX)
* \param device The name of the device for execution
@@ -295,7 +295,7 @@
* \throw RuntimeException for all client errors
*/
virtual CommandReply set_model(const std::string& key,
std::string_view model,
const std::vector<std::string_view>& model,
const std::string& backend,
const std::string& device,
int batch_size = 0,
@@ -311,7 +311,7 @@
* \brief Set a model from std::string_view buffer in the
* database for future execution in a multi-GPU system
* \param name The name to associate with the model
* \param model The model as a continuous buffer string_view
* \param model The model as a sequence of buffer string_view chunks
* \param backend The name of the backend
* (TF, TFLITE, TORCH, ONNX)
* \param first_gpu The first GPU to use with this model
@@ -328,7 +328,7 @@
* \throw RuntimeException for all client errors
*/
virtual void set_model_multigpu(const std::string& name,
const std::string_view& model,
const std::vector<std::string_view>& model,
const std::string& backend,
int first_gpu,
int num_gpus,
@@ -520,6 +520,33 @@ class RedisServer {
const std::string& key,
const bool reset_stat) = 0;

/*!
* \brief Retrieve the current model chunk size
* \returns The size in bytes for model chunking
*/
virtual int get_model_chunk_size() = 0;

/*!
* \brief Reconfigure the chunking size that Redis uses for model
* serialization, replication, and the model_get command.
* \details This method issues the AI.CONFIG command to the Redis
* database to change the model chunking size. The default
* size of 511MB should be sufficient for most applications,
* so it is expected to be very rare that a client calls
* this method.
* \param chunk_size The new chunk size in bytes
* \throw SmartRedis::Exception if the command fails.
*/
virtual void set_model_chunk_size(int chunk_size) = 0;

/*!
* \brief Store the current model chunk size
* \param chunk_size The updated model chunk size
*/
virtual void store_model_chunk_size(int chunk_size) {
_model_chunk_size = chunk_size;
}

/*!
* \brief Run a CommandList via a Pipeline. For clustered databases
* all commands must go to the same shard
@@ -567,6 +594,12 @@
*/
int _command_attempts;

/*!
* \brief The chunk size into which models need to be broken for
* transfer to Redis
*/
int _model_chunk_size;

/*!
* \brief Default value of connection timeout (seconds)
*/
@@ -630,6 +663,11 @@
*/
bool _is_domain_socket;

/*!
* \brief Default model chunk size
*/
static constexpr int _UNKNOWN_MODEL_CHUNK_SIZE = -1;
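
The sentinel suggests a lazy-caching pattern for get_model_chunk_size(); a sketch, assuming "AI.CONFIG GET MODEL_CHUNK_SIZE" returns an integer reply (that syntax and the integer() accessor are assumptions, not necessarily the PR's exact code):

int Redis::get_model_chunk_size()
{
    // Query the server only once; reuse the cached value afterward.
    if (_model_chunk_size == _UNKNOWN_MODEL_CHUNK_SIZE) {
        AddressAnyCommand cmd;
        cmd << "AI.CONFIG" << "GET" << "MODEL_CHUNK_SIZE";
        CommandReply reply = run(cmd);
        _model_chunk_size = (int)reply.integer();
    }
    return _model_chunk_size;
}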

/*!
* \brief Environment variable for connection timeout
*/