Update expensify_prod branch #2034

Merged: 6 commits, Dec 18, 2024
BedrockServer.cpp (7 additions, 0 deletions)

@@ -1823,6 +1823,7 @@ atomic<bool> __quiesceShouldUnlock(false);
 thread* __quiesceThread = nullptr;

 void BedrockServer::_control(unique_ptr<BedrockCommand>& command) {
+    SINFO("Received control command: " << command->request.methodLine);
     SData& response = command->response;
     string reason = "MANUAL";
     response.methodLine = "200 OK";
@@ -1913,7 +1914,9 @@ void BedrockServer::_control(unique_ptr<BedrockCommand>& command) {
         if (dbPoolCopy) {
             SQLiteScopedHandle dbScope(*_dbPool, _dbPool->getIndex());
             SQLite& db = dbScope.db();
+            SINFO("[quiesce] Exclusive locking DB");
             db.exclusiveLockDB();
+            SINFO("[quiesce] Exclusive locked DB");
             locked = true;
             while (true) {
                 if (__quiesceShouldUnlock) {
@@ -1936,12 +1939,16 @@ void BedrockServer::_control(unique_ptr<BedrockCommand>& command) {
                 response.methodLine = "200 Blocked";
             }
         } else if (SIEquals(command->request.methodLine, "UnblockWrites")) {
+            SINFO("[quiesce] Locking __quiesceLock");
             lock_guard lock(__quiesceLock);
+            SINFO("[quiesce] __quiesceLock locked");
             if (!__quiesceThread) {
                 response.methodLine = "200 Not Blocked";
             } else {
                 __quiesceShouldUnlock = true;
+                SINFO("[quiesce] Joining __quiesceThread");
                 __quiesceThread->join();
+                SINFO("[quiesce] __quiesceThread joined");
                 delete __quiesceThread;
                 __quiesceThread = nullptr;
                 response.methodLine = "200 Unblocked";
sqlitecluster/SQLite.cpp (5 additions, 0 deletions)

@@ -331,13 +331,17 @@ void SQLite::exclusiveLockDB() {
     // writes in this case.
     // So when these are both locked by the same thread at the same time, `commitLock` is always locked first, and we do it the same way here to avoid deadlocks.
     try {
+        SINFO("Locking commitLock");
         _sharedData.commitLock.lock();
+        SINFO("commitLock Locked");
     } catch (const system_error& e) {
         SWARN("Caught system_error calling _sharedData.commitLock, code: " << e.code() << ", message: " << e.what());
         throw;
     }
     try {
+        SINFO("Locking writeLock");
         _sharedData.writeLock.lock();
+        SINFO("writeLock Locked");
     } catch (const system_error& e) {
         SWARN("Caught system_error calling _sharedData.writeLock, code: " << e.code() << ", message: " << e.what());
         throw;
@@ -673,6 +677,7 @@ bool SQLite::prepare(uint64_t* transactionID, string* transactionhash) {
     static const size_t deleteLimit = 10;
     if (minJournalEntry < oldestCommitToKeep) {
         auto startUS = STimeNow();
+        shared_lock<shared_mutex> lock(_sharedData.writeLock);
         string query = "DELETE FROM " + _journalName + " WHERE id < " + SQ(oldestCommitToKeep) + " LIMIT " + SQ(deleteLimit);
         SASSERT(!SQuery(_db, "Deleting oldest journal rows", query));
         size_t deletedCount = sqlite3_changes(_db);
sqlitecluster/SQLiteNode.cpp (6 additions, 1 deletion)

@@ -493,7 +493,12 @@ void SQLiteNode::_sendOutstandingTransactions(const set<uint64_t>& commitOnlyIDs
 }

 list<STable> SQLiteNode::getPeerInfo() const {
-    shared_lock<decltype(_stateMutex)> sharedLock(_stateMutex);
+    // This does not lock _stateMutex. It follows the rule in `SQLiteNode.h` that says:
+    // * Alternatively, a public `const` method that is a simple getter for an atomic property can skip the lock.
+    // `peer->getData()` is atomic internally, so we can treat it as a simple getter for an atomic property.
+    // _peerList is also `const`, so we can iterate over it safely regardless of the lock.
+    // This makes this function a slightly more complex getter for an atomic property, but it's still safe to skip
+    // the state lock here.
     list<STable> peerData;
     for (SQLitePeer* peer : _peerList) {
         peerData.emplace_back(peer->getData());
sqlitecluster/SQLiteNode.h (3 additions, 3 deletions)

@@ -25,12 +25,12 @@
  * Rules for maintaining SQLiteNode methods so that atomicity works as intended.
  *
  * No non-const members should be publicly exposed.
- * Any public method that is `const` must shared_lock<>(nodeMutex).
+ * Any public method that is `const` must shared_lock<>(_stateMutex).
  * Alternatively, a public `const` method that is a simple getter for an atomic property can skip the lock.
- * Any public method that is non-const must unique_lock<>(nodeMutex) before changing any internal state, and must hold
+ * Any public method that is non-const must unique_lock<>(_stateMutex) before changing any internal state, and must hold
  * this lock until it is done changing state to make this method's changes atomic.
 * Any private methods must not call public methods.
- * Any private methods must not lock nodeMutex (for recursion reasons).
+ * Any private methods must not lock _stateMutex (for recursion reasons).
 * Any public methods must not call other public methods.
 *
 * `_replicate` is a special exception because it runs in multiple threads internally. It needs to handle locking if it
sqlitecluster/SQLitePeer.cpp (1 addition, 0 deletions)

@@ -225,6 +225,7 @@ void SQLitePeer::getCommit(uint64_t& count, string& hashString) const {
 }

 STable SQLitePeer::getData() const {
+    lock_guard<decltype(peerMutex)> lock(peerMutex);
     // Add all of our standard stuff.
     STable result({
         {"name", name},