DIRAC v6r13
No specific changes are needed on the DIRAC side, but you may need to restart your servers for the IPv6 stack to be enabled.
Work still ongoing.
There are some mandatory changes in the CS structure, even if you choose to keep using FTS2.
- In the DataManagement section of Operations, a new option 'FTSVersion' is needed, whose value can be 'FTS2' (default) or 'FTS3'
- Still in Operations/[default or setup]/DataManagement/, a new nested section needs to be created:
```
FTSPlacement
{
  FTS2
  {
    ...
  }
  FTS3
  {
    # How to choose the FTS server. It can be:
    #   Random : choose a random server from the list
    #   Sequence : use the servers one after the other
    #   Failover : always use the first one, go to the next one in case of problems
    ServerPolicy = Random
  }
}
```
- The section Systems/DataManagement/Services/FTSManager/FTSStrategy can be removed, and its content moved to the previously created section Operations/[default or setup]/DataManagement/FTSPlacement/FTS2
- The section /Resources/FTSEndpoints also needs to be divided into FTS2 and FTS3. The previous list of servers can go in FTS2. BEWARE: the FTS3 servers need to point to the REST API port (default 8446)
- In Systems/DataManagement/Agents/FTSAgents, the attribute FTSGraphValidityPeriod is removed, and the attribute RWAccessValidityPeriod is replaced with FTSPlacementValidityPeriod
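For illustration, the divided FTSEndpoints section could look like the following sketch (the endpoint names and URLs are made up; only the FTS2/FTS3 split and the 8446 REST port come from the points above):

```
FTSEndpoints
{
  FTS2
  {
    # previous list of servers
    CERN-FTS2 = https://fts2.example.org:8443
  }
  FTS3
  {
    # must point to the REST API port (default 8446)
    CERN-FTS3 = https://fts3.example.org:8446
  }
}
```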
The FTS3 REST API release is needed in the externals, but is not yet deployed. For testing, you can get it here: https://github.com/cern-it-sdc-id/fts3-rest/tree/master
The new RMS now uses SQLAlchemy as a backend. This specific change is fully transparent. However, the uniqueness of the RequestName is not enforced anymore, and this requires two actions:
- All the interfaces of the ReqClient have been changed to use RequestID instead of RequestName in the method signatures. Please change your code accordingly. The method names remain the same, except for:
  - getRequestNamesList -> getRequestIDsList
  - getRequestNamesForJob -> getRequestIDsForJob
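As a porting aid, the two renames above can be captured in a small lookup, for instance when scanning extension code for calls to update (the helper itself is illustrative, not part of DIRAC):

```python
# Renamed ReqClient methods (RequestName -> RequestID change); the mapping
# content comes from the release notes, the helper is an illustrative sketch.
RENAMED = {
    "getRequestNamesList": "getRequestIDsList",
    "getRequestNamesForJob": "getRequestIDsForJob",
}

def new_name(old_method):
    """Map an old ReqClient method name to its new name (identity otherwise)."""
    return RENAMED.get(old_method, old_method)
```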
- Existing databases have to be altered:

```sql
ALTER TABLE Request DROP KEY RequestName;
```

In order to deal better with an SE in downtime or similar events, an extra field needs to be added to the Request table:

```sql
ALTER TABLE Request ADD COLUMN NotBefore DATETIME DEFAULT NULL;
```

Finally, we also rely on the cascade delete feature:

```sql
ALTER TABLE Operation DROP FOREIGN KEY Operation_ibfk_1;
ALTER TABLE Operation
  ADD CONSTRAINT Operation_ibfk_1
  FOREIGN KEY (`RequestID`) REFERENCES `Request` (`RequestID`)
  ON DELETE CASCADE;

ALTER TABLE File DROP FOREIGN KEY File_ibfk_1;
ALTER TABLE File
  ADD CONSTRAINT File_ibfk_1
  FOREIGN KEY (`OperationID`) REFERENCES `Operation` (`OperationID`)
  ON DELETE CASCADE;
```
Two new CS options avoid hard-coding the registration protocols and third-party protocols in several places:
- /Resources/FileCatalogs/RegistrationProtocols
- /Resources/FileCatalogs/ThirdPartyProtocols
The default value is ['srm', 'dips'] for both. Two helpers are available to read them.
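In the CS this could look like the following sketch (the option paths and the default values come from the text above; the comma-separated list syntax is the usual CS form):

```
Resources
{
  FileCatalogs
  {
    RegistrationProtocols = srm, dips
    ThirdPartyProtocols = srm, dips
  }
}
```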
The DataLogging service, as it was, won't be populated any more. New installations won't install it. A replacement will come in a future release.
There is a new script (dirac-admin-add-shifter) for adding or modifying the list of shifters in the CS.
https://github.com/DIRACGrid/DIRAC/pull/2259 introduces plugins for TaskManager
- The method "generateTasks" of transformation plugins has been renamed simply "run" (this may affect extensions).
- There are two plugins for TaskManager, which define how the list of destination sites is created:
  - "BySE"
  - "ByJobType"

By default nothing changes: to use a plugin other than "BySE", VOs need to explicitly set a CS entry (/Operations//Transformations/DestinationPlugin), and at the moment the only value other than "BySE" is "ByJobType".
If "ByJobType" is selected, the CS section "JobTypeMapping" in Operations has to be present. Exclude and Allow rules define how each and every job can be routed based on its type, so a proper configuration can accommodate any kind of computing model. Special flags like "ALL" can be specified.
By default, all sites are allowed to run every job. The actual rules are freely specified in the Operations JobTypeMapping section, whose content may look like this:
```
User
{
  Exclude = PAK
  Exclude += Ferrara
  Exclude += Bologna
  Exclude += Paris
  Exclude += CERN
  Exclude += IN2P3
  Allow
  {
    Paris = IN2P3
    CERN = CERN
    IN2P3 = IN2P3
  }
}
DataReconstruction
{
  Exclude = PAK
  Exclude += Ferrara
  Exclude += CERN
  Exclude += IN2P3
  Allow
  {
    Ferrara = CERN
    CERN = CERN
    IN2P3 = IN2P3
    IN2P3 += CERN
  }
}
Merge
{
  Exclude = ALL
  Allow
  {
    CERN = CERN
    IN2P3 = IN2P3
  }
}
```
The sites in the Exclude list are removed from the destinations. The Allow section says which site each site may help (e.g. Paris = IN2P3 means Paris may run jobs destined for IN2P3).
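The Exclude/Allow resolution described above can be sketched as follows. This is an illustration of the rules, not DIRAC's actual implementation; the function name and signature are made up:

```python
# Illustrative sketch of JobTypeMapping resolution: which sites may run a
# job whose destination is `target`, given the Exclude/Allow rules above.

def eligible_sites(all_sites, exclude, allow, target):
    """Sites allowed to run a job destined for `target`.

    exclude -- list of excluded sites, or ["ALL"] to exclude everything
    allow   -- dict mapping a helper site to the list of sites it may help
    """
    # start from all sites, minus the exclusions ("ALL" removes everything)
    sites = set() if "ALL" in exclude else set(all_sites) - set(exclude)
    # re-admit every site explicitly allowed to help the target site
    for helper, helped in allow.items():
        if target in helped:
            sites.add(helper)
    return sites
```

With the "Merge" rules from the example (Exclude = ALL, Allow CERN = CERN and IN2P3 = IN2P3), only CERN itself would be eligible for a CERN-targeted merge job.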
Introduction of a static monitoring system consisting of a database (InstalledComponentsDB) and a service (ComponentMonitoring) for keeping track of which components (services, databases, agents, ...) are installed and where (on which host), as well as the date and time of installation and uninstallation. The database should be installed before the service. Upon installation of the database and service, both will be registered in the monitoring system, and any component installed or uninstalled after that point will be automatically recorded.
The new script dirac-populate-component-db populates the newly introduced monitoring system with all currently installed DIRAC components. This script should be used only once, right after installing the monitoring service: every additional run will create duplicate entries in the monitoring system.
The base database class DB no longer takes the maxQueueSize argument, previously used to manage the internal pool of database connections; we have switched to one database connection per service thread. As a consequence, all database classes inheriting from DB must not use the obsolete maxQueueSize argument in their constructors, and hence must not pass it to the constructor of the superclass.
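A minimal sketch of the required change in a DB subclass (only DB and maxQueueSize come from the text above; the stand-in base class, the subclass name, and the constructor arguments are illustrative, not DIRAC's real signatures):

```python
# Illustrative stand-in for the real base class, which no longer
# accepts maxQueueSize.
class DB(object):
    def __init__(self, name, fullName):
        self.name = name
        self.fullName = fullName

# Before: def __init__(self, maxQueueSize=10):
#             DB.__init__(self, "MyDB", "Framework/MyDB", maxQueueSize)
# After:  maxQueueSize is dropped from both the subclass signature
#         and the call to the superclass constructor.
class MyDB(DB):
    def __init__(self):
        DB.__init__(self, "MyDB", "Framework/MyDB")
```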