diff --git a/preview/pr-29/2022/10/17/Use-Linting-Tools-to-Save-Time.html b/preview/pr-29/2022/10/17/Use-Linting-Tools-to-Save-Time.html
deleted file mode 100644
Software Engineering Team - CU Dept. of Biomedical Informatics
Tip of the Week: Use Linting Tools to Save Time
Each week we seek to provide a software tip of the week geared towards helping you achieve your software goals. Views expressed in the content belong to the content creators and not the organization, its affiliates, or employees. If you have any software questions or suggestions for an upcoming tip of the week, please don’t hesitate to reach out to #software-engineering on Slack or email DBMISoftwareEngineering at olucdenver.onmicrosoft.com
Have you ever found yourself spending hours formatting your code so it looks just right? Have you ever caught a duplicative import statement in your code? We recommend using open source linting tools to help avoid common issues like these and save time.
Software Linting is the practice of detecting and sometimes automatically fixing stylistic, syntactical, or other programmatic issues. Linting usually involves installing standardized or opinionated libraries which allow you to quickly make code corrections. Using linting tools also can help you learn nuanced or unwritten intricacies of programming languages while you solve problems in your work.
TLDR (too long, didn’t read); Linting is a type of static analysis which can be used to instantly address many common code issues. isort provides automatic Python import statement linting. pre-commit provides an easy way to test and apply isort (in addition to other linting tools) through source control workflows.
Example: Python Code Style Linting with isort
Isort is a Python utility for linting package import statements (sorting, deduplication, etc). Isort may be used to automatically fix your import statements or test for their consistency. See the isort installation documentation for more information on getting started.
Before isort
The following Python code shows a series of import statements. There are duplicate imports and the imports are a mixture of custom (possibly local), external, and built-in packages. Isort can check this code using the command: isort <file or path> --check.
from custompkg import b, a
import numpy as np
import pandas as pd
import sys
import os
import pandas as pd
import os
After isort
Isort can fix the code automatically using the command: isort <file or path>. After applying the fixes, notice that all packages are alphabetized and grouped by built-in, external, and custom packages.
import os
import sys

import numpy as np
import pandas as pd
from custompkg import a, b
Using isort with pre-commit
Pre-commit is a framework which can be used to apply linting checks and fixes as git hooks or from the command line. Pre-commit includes existing hooks for many libraries, including isort. See the pre-commit installation documentation to get started.
Example .pre-commit-config.yaml Configuration
The following YAML content can be used to reference isort from pre-commit. This configuration can be expanded to include many different pre-commit hooks.
# example .pre-commit-config.yaml file leveraging isort
# See https://pre-commit.com/hooks.html for more hooks
---
repos:
  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
Example Using pre-commit Manually
Imagine we have a file, example.py, which includes the content from Before isort. Running pre-commit manually on the directory files will first automatically apply isort formatting. The second time pre-commit is run there will be no issue (pre-commit resolved it automatically).
First detecting and fixing the file:
% pre-commit run --all-files
isort...................................Failed
- hook id: isort
- files were modified by this hook

Fixing example.py
Then checking that the file was fixed:
% pre-commit run --all-files
isort...................................Passed
diff --git a/preview/pr-29/2022/11/27/Diagrams-as-Code.html b/preview/pr-29/2022/11/27/Diagrams-as-Code.html
deleted file mode 100644
Tip of the Week: Diagrams as Code
Diagrams can be a useful way to illuminate and communicate ideas. Free-form drawing or drag and drop tools are one common way to create diagrams. With this tip of the week we introduce another option: diagrams as code (DaC), or creating diagrams by using code.
TLDR (too long, didn’t read); Diagrams as code (DaC) tools provide an advantage for illustrating concepts by enabling quick visual positioning, source-controllable input, portability (for both input and output formats), and open collaboration through reproducibility. Consider using Mermaid (among many other DaC tools) to assist your diagramming efforts; it can be used directly, within your markdown files, or in GitHub comments using code blocks (for example, ` ```mermaid `).
Example Mermaid Diagram as Code
flowchart LR
    a --> b
    b --> c
    c --> d1
    c --> d2
The above shows example Mermaid flowchart code and its rendered output. The syntax is specific to Mermaid and acts as a simple coding language to help you depict ideas. Mermaid also includes options for sequence, class, Gantt, and other diagram types. Mermaid provides a live editor which can be used to quickly draft and share content.
Mermaid GitHub Integration

GitHub comment (screenshot)

GitHub comment preview (screenshot)
Mermaid diagrams may be rendered directly from markdown (.md) files and text communication content (like pull request or issue comments) within GitHub. See GitHub’s blog post on Mermaid for more details covering this topic.
Mermaid Jupyter Notebook Integration

Mermaid content rendered in a Jupyter notebook (screenshot)
Mermaid diagrams can be rendered directly within Jupyter notebooks with a small amount of additional code and a rendering service. One way to render mermaid and other diagrams within notebooks is to use Kroki.io. See this example for an interactive demonstration.
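As a sketch of how such a rendering service is addressed, Kroki’s GET endpoints accept the diagram source deflate-compressed and base64url-encoded as the final URL path segment. The snippet below only builds the URL (the diagram text is an illustration; no network request is made):

```python
# Build a Kroki.io URL for a Mermaid diagram (no network request is made here).
# Kroki GET endpoints take the diagram source, deflate-compressed and
# base64url-encoded, as the last path segment.
import base64
import zlib

diagram = """flowchart LR
    a --> b
    b --> c
"""

encoded = base64.urlsafe_b64encode(
    zlib.compress(diagram.encode("utf-8"), 9)
).decode("ascii")
url = f"https://kroki.io/mermaid/svg/{encoded}"
print(url)
```

Fetching that URL (for example, with a notebook image display helper) would return the rendered SVG.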
Version Controlling Your Diagrams
graph LR
    subgraph Compose
      write[Write Diagram Code]
      render[Render Diagram]
    end
    subgraph Store[Save and Share]
      save[Upload Diagram]
    end
    write --> | create | render
    render --> | revise | write
    render --> | code and exports | save
Mermaid version control workflow example
Creating your diagrams with code means you can enable reproducible and collaborative work on version control systems (like git). Using git in this way allows you to reference and remix your diagrams as part of development. It also allows others to collaborate on diagrams together making modifications as needed.
Additional Resources
Please see the following additional resources related to diagrams as code.
diff --git a/preview/pr-29/2022/12/05/Data-Engineering-with-SQL-Arrow-and-DuckDB.html b/preview/pr-29/2022/12/05/Data-Engineering-with-SQL-Arrow-and-DuckDB.html
deleted file mode 100644
Tip of the Week: Data Engineering with SQL, Arrow and DuckDB
Apache Arrow is a language-independent and high performance data format useful in many scenarios. DuckDB is an in-process SQL-based data management system which is Arrow-compatible. In addition to providing a SQLite-like database format, DuckDB also provides a standardized and high performance way to work with Arrow data where otherwise one may be forced to use language-specific data structures or transforms.
TLDR (too long, didn’t read); DuckDB may be used to access and transform Arrow-based data from multiple data formats through SQL. Using Arrow and DuckDB provides a cross-language way to access and manage data. Data development with these tools may also enable improvements in performance, understandability, or long-term maintainability of your code.
Reduce Wasted Conversion Effort with Arrow
flowchart TB
    Python:::outlined <--> Arrow
    R:::outlined <--> Arrow
    C++:::outlined <--> Arrow
    Java:::outlined <--> Arrow
    others...:::outlined <--> Arrow

    classDef outlined fill:#fff,stroke:#333
Arrow provides a multi-language data format which prevents you from needing to convert to other formats when dealing with multiple in-memory or serialized data formats. For example, this means that a Python and an R package may use the same in-memory or file-based data without conversion (where normally a Python Pandas dataframe and R data frame may require a conversion step in between).
flowchart TB
    subgraph Python
      Pandas:::outlined
      Polars:::outlined
      dict[Python dict]:::outlined
      list[Python list]:::outlined
    end

    Pandas <--> Arrow
    Polars <--> Arrow
    dict <--> Arrow
    list <--> Arrow

    classDef outlined fill:#fff,stroke:#333
The same stands for various libraries within one language - Arrow enables interchange between various language library formats (for example, a Python Pandas dataframe and Python dictionary are two distinct in-memory formats which may require conversions). Conversions to or from these formats can involve data type or other inferences which are costly to productivity. You can save time and effort by avoiding conversions using Arrow.
Using SQL to Join or Transform Arrow Data via DuckDB
flowchart LR
    subgraph duckdb["DuckDB Processing"]
        direction BT
        SQL[SQL] --> DuckDB[DuckDB Client]
    end
    parquet1[example.parquet] --> duckdb
    sqlite[example.sqlite] --> duckdb
    csv[example.csv] --> duckdb
    arrow["in-memory Arrow"] --> duckdb
    pandas["in-memory Pandas"] --> duckdb
    duckdb --> Arrow
    Arrow --> Other[Other work...]
DuckDB provides a management client and relational database format (similar to SQLite databases) which may be handled with Arrow. SQL may be used with the DuckDB client to filter, join, or change various data types. Due to Arrow’s cross-language properties, there is no additional cost to using SQL through DuckDB to return data for use within other purpose-built data formats. DuckDB provides client APIs in many languages (for example, Python, R, and C++), making it possible to write DuckDB client code with SQL to manage data without having to write manual sub-procedures.
flowchart TB
    subgraph duckdb["DuckDB Processing"]
        direction BT
        SQL[SQL] --> DuckDB[DuckDB Client]
    end
    Python:::outlined <--> duckdb
    R:::outlined <--> duckdb
    C++:::outlined <--> duckdb
    Java:::outlined <--> duckdb
    others...:::outlined <--> duckdb
    duckdb <--> Arrow

    classDef outlined fill:#fff,stroke:#333
Using SQL to perform these operations with Arrow provides an opportunity for your data code to be used (or understood) within other languages without additional rewrites. SQL also provides you access to roughly 48 years worth of data management improvements without being constrained by imperative language data models or schema (reference: SQL Wikipedia: First appeared: 1974).
Example with SQL to Join Arrow Data with DuckDB in Python
Jupyter notebook example screenshot with DuckDB and Arrow data handling
The following example notebook shows how to use SQL to join data from multiple sources using the DuckDB client API within Python. The example includes DuckDB querying a remote CSV, local Parquet file, and Arrow in-memory tables.
Linked Example
Additional Resources
Please see the following additional resources.
diff --git a/preview/pr-29/2022/12/12/Remove-Unused-Code-to-Avoid-Decay.html b/preview/pr-29/2022/12/12/Remove-Unused-Code-to-Avoid-Decay.html
deleted file mode 100644
Tip of the Week: Remove Unused Code to Avoid Software Decay
The act of creating software often involves many iterations of writing, collaboration, and testing. During this process it’s common to lose awareness of code which is no longer used, and which thus may not be tested or otherwise linted. Unused code may contribute to “software decay”, the gradual diminishment of code quality or functionality. This post will cover software decay and strategies for addressing unused code to help keep your code quality high.
TLDR (too long, didn’t read); Unused code is easy to amass and may cause your code quality or code functionality to diminish (“decay”) over time. Effort must be taken to maintain any code or artifacts you add to your repositories, including those which are unused. Consider using Vulture, Pylint, or Coverage to help illuminate sections of your code which may need to be removed.
Code Lifecycle and Maintenance
stateDiagram
    direction LR
    removal : removed or archived
    changes : changes needed
    [*] --> added
    added --> maintenance
    state maintenance {
      direction LR
      updated --> changes
      changes --> updated
    }
    maintenance --> removal
    removal --> [*]
Diagram showing code lifecycle activities.
Adding code to a project involves a loose agreement to maintain it for however long the code is available. Maintaining code can involve active effort (making changes) as well as passive impacts like longer test durations or decreased readability (simply from having more code).
When considering multiple parts of code in many files, this maintenance can become untenable, leading to the gradual decay of your code quality or functionality. For example, let’s assume one line of code costs 30 seconds to maintain (feel free to substitute time with monetary or personnel aspects as an example measure here too). 1000 lines of code would cost 500 minutes (or about 8 hours) to maintain. This becomes more complex when considering multiple files, collaborators, or languages.
Think about your project as if it were on a hiking trail: “Carry as little as possible, but choose that little with care.” (Earl Shaffer). Be careful what code you choose to carry; it may impact your ability to address needs over time and lead to otherwise unintended software decay.
Detecting Unused Code with Vulture
Understanding the cost of added content, it’s important to routinely examine which parts of your code are still necessary. You can prepare your code for a long journey by detecting (and removing) unused code with various automated tools. These tools are generally designed for static analysis and linting, meaning they may also be incorporated into automated and routine testing.
$ vulture unused_code_example.py
unused_code_example.py:3: unused import 'os' (90% confidence)
unused_code_example.py:4: unused import 'pd' (90% confidence)
unused_code_example.py:7: unused function 'unused_function' (60% confidence)
unused_code_example.py:14: unused variable 'unused_var' (60% confidence)
Example of Vulture command line usage to discover unused code.
Vulture is one tool dedicated to finding unused Python code. Vulture provides both a command line interface and a Python API for discovering unused code. It also provides a rough confidence value to show how certain it is that a block of code is unused. See the following interactive example for a demonstration of using Vulture.
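To illustrate the kind of static analysis at work here, the sketch below implements a toy unused-import detector with Python’s built-in ast module (a simplified illustration of the technique, not Vulture’s actual implementation):

```python
# Toy unused-import detector: a simplified illustration of the static
# analysis Vulture performs (not Vulture's actual implementation).
import ast


def find_unused_imports(source: str) -> list:
    """Return imported names which are never referenced in the source."""
    tree = ast.parse(source)
    imported = set()
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            # any name that is loaded or stored counts as "used" here
            used.add(node.id)
    return sorted(imported - used)


example = """
import os
import sys

print(sys.argv)
"""
print(find_unused_imports(example))  # ['os']
```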
Interactive Example on Unused Code Detection
Further Code Usefulness Detection with Pylint and Coverage.py
In addition to Vulture, Pylint and Coverage.py can be used in a similar way to help show where code may not have been used within your project.
Pylint focuses on code style and other static analysis in addition to unused variables. See Pylint’s Checkers page for more details; the checks prefixed with “unused-” focus on unused code.
Coverage.py helps show you which parts of your code have been executed. A common use case for Coverage involves measuring “test coverage”: which parts of your code are executed in relation to the tests written for that code. This provides another perspective on code utility: if there’s no test for the code, is it worth keeping?
Additional Resources
diff --git a/preview/pr-29/2023/01/03/Linting-Documentation-as-Code.html b/preview/pr-29/2023/01/03/Linting-Documentation-as-Code.html
deleted file mode 100644
Tip of the Week: Linting Documentation as Code
Software documentation is sometimes treated as a less important or secondary aspect of software development. Treating documentation as code allows developers to version control the shared understanding and knowledge surrounding a project. Leveraging this paradigm also enables the use of tools and patterns which have been used to strengthen code maintenance. This article covers one such pattern: linting, or static analysis, for documentation treated like code.
TLDR (too long, didn’t read); There are many linting tools available which enable quick revision of your documentation. Try using codespell for spelling corrections, mdformat for markdown file formatting corrections, and vale for more complex editorial style or natural language assessment within your documentation.
Spelling Checks
<!--- readme.md --->
## Example Readme

Thsi project is a wokr in progess.
Code will be updated by the team very often.

(CU Anschutz)[https://www.cuanschutz.edu/]
Example readme.md with incorrectly spelled words
% codespell readme.md
readme.md:4: Thsi ==> This
readme.md:4: wokr ==> work
readme.md:4: progess ==> progress
Example showing codespell detection of misspelled words
Spelling checks may be used to automatically detect incorrect spellings of words within your documentation (and code!). Codespell is one library which can lint your word spelling. Codespell may be used through the command-line and also through a pre-commit hook.
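Like isort, codespell ships a pre-commit hook, so spelling checks can run automatically on each commit. A sketch of a .pre-commit-config.yaml entry (the rev shown is an assumption; pin whichever release you actually use):

```yaml
# example .pre-commit-config.yaml entry for codespell
# (the rev below is an assumption -- pin the release you actually use)
repos:
  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.2
    hooks:
      - id: codespell
```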
Markdown Format Linting
<!--- readme.md --->
## Example Readme

This project is a work in progress.
Code will be updated by the team very often.

(CU Anschutz)[https://www.cuanschutz.edu/]
Example readme.md with markdown issues
% markdownlint readme.md
readme.md:2 MD041/first-line-heading/first-line-h1
First line in a file should be a top-level heading
[Context: "## Example Readme"]
readme.md:6:5 MD011/no-reversed-links Reversed link
syntax [(link)[https://www.cuanschutz.edu/]]
Example showing markdownlint detection of issues
The format of your documentation files may also be linted for common issues. This may catch things which are otherwise hard to see when editing content. It may also improve the overall web accessibility of your content, for example, through proper HTML header order and image alternate text. Markdownlint is one library which can be used to find issues within markdown files.
Additional and similar resources to explore in this area:
Editorial Style and Grammar
<!--- readme.md --->
# Example Readme

This project is a work in progress.
Code will be updated by the team very often.

[CU Anschutz](https://www.cuanschutz.edu/)
Example readme.md with questionable editorial style
% vale readme-example.md
readme-example.md
2:12  error    Did you really mean 'Readme'?   Vale.Spelling
5:11  warning  'be updated' may be passive     write-good.Passive
               voice. Use active voice if you
               can.
5:34  warning  'very' is a weasel word!        write-good.Weasel
Example showing vale warnings and errors
Maintaining consistent editorial style and grammar may also be a focus within your documentation. These issues are sometimes more difficult to detect and more opinionated in nature. In some cases, organizations publish guides on this topic (see the Microsoft Writing Style Guide, or the Google Developer Documentation Style Guide). Some of the complexity of writing style may be linted through tools like Vale. Using common configurations through Vale can unify how language is used within your documentation by linting for common style and grammar.
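Vale is configured through a .vale.ini file at the project root. A minimal sketch (the StylesPath value and style names are assumptions, matching the write-good package shown in the output above):

```ini
# example .vale.ini (a minimal sketch; paths and style names are assumptions)
StylesPath = styles
MinAlertLevel = suggestion

[*.md]
BasedOnStyles = Vale, write-good
```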
Additional and similar resources to explore in this area:
Resources
Please see the following resources on this topic.
diff --git a/preview/pr-29/2023/01/17/Timebox-Your-Software-Work.html b/preview/pr-29/2023/01/17/Timebox-Your-Software-Work.html
deleted file mode 100644
Tip of the Week: Timebox Your Software Work
Programming often involves long periods of problem solving which can sometimes lead to unproductive or exhausting outcomes. This article covers one way to avoid unproductive time expense and protect yourself from overexhaustion: a technique called “timeboxing” (also sometimes referenced as “timeblocking”).
TLDR (too long, didn’t read); Use timeboxing techniques such as Pomodoro® or 52/17 to help modularize your software work to ensure you don’t fall victim to Parkinson’s Law. Timeboxing may also map well to GitHub Issues, which allows your software tasks to be further aligned, documented, and chunked in collaboration with others.
Controlling Work Time Expansion

Image depicting work as a creature with a timebox around it.
Have you ever spent more time than you thought you would on a task? An adage which helps explain this phenomenon is Parkinson’s Law:
“… work expands so as to fill the time available for its completion.”
The practice of writing software is not protected from this “law”. It can affect us in worse ways during long periods of uninterrupted programming, where we may lose sight of productive goals.
One way to address this is through the use of timeboxing techiques. Timeboxing sets a fixed limit to the amount of time one may spend on a specific activity. One can use timeboxing to systematically address many tasks, for example, as with the Pomodoro® Technique (developed by Francesco Cirillo) or 52/17 rule. While there are many ways to apply timeboxing, make sure to balance activity with short breaks and focus switches to help ensure we don’t become overwhelmed.


Timeboxing Means Modularization


Timeboxing has an auxiliary benefit of framing your work as objective and oftentimes smaller chunks (we have to know what we’re timeboxing in order to use this technique). Creating distinct chunks of work applies to both our daily time schedule and to code itself. This concept is more broadly called “modularization” and helps to distinguish large portions of work (whether in real life or in code) as smaller and more maintainable chunks.

# Goals
- Finish writing paper

Vague and possibly large task

# Goals
- Finish writing paper
  - Create paper outline
  - Finish writing introduction
  - Check for dead hyperlinks
  - Request internal review

Modular and more understandable tasks


Breaking down large amounts of work into smaller chunks within our code helps to ensure long-term maintainability and understandability. Similarly, keeping our tasks small can help ensure our goals are achievable and understandable (to ourselves or others). Without this modularity, tasks can become impossible to achieve (subjective in nature) or very difficult to understand. Stated differently, taking many small steps can lead to a big change in an organized, oftentimes less exhausting way (related graphic).


Version Control and Timeboxing

# Repo Issues
- "Prevent foo warning" - 20 minutes
- "Remove bar feature" - 20 minutes
- "Address baz error" - 20 minutes

List of example version control repository issues with associated time duration.


The parallels between the time we give a task and the related code can work to your benefit. For example, Github Issues can be created to outline a timeboxed task which relates to a distinct chunk of code to be created, updated, or fixed. Once development tasks have been outlined as issues, a developer can use timeboxing to help decide how much time to allocate to each issue.


Using Github Issues in this way provides a way to observe task progress associated with one or many repositories. It also increases collaborative opportunities for task sizing and description. For example, if a task looks too large to complete in a reasonable amount of time, developers may work together to break the task down into smaller modules of work.
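As a sketch, an issue written with timeboxing in mind might pair the task with an explicit time budget and a clear definition of done (the task name, timebox, and wording below are illustrative, not a required template):

```markdown
<!-- Hypothetical GitHub Issue pairing a small task with a timebox -->
## Prevent foo warning

**Timebox:** 20 minutes
**Definition of done:** running the test suite no longer emits the foo warning.
```

If the work can’t plausibly fit the timebox, that’s a signal to split the issue into smaller ones before starting.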


Be Kind to Yourself: Take Breaks


While timeboxing is often a conversation about how to be more productive, it’s also worth remembering: take breaks to be kind to yourself and more effective. Some studies and thought leadership have shown that taking breaks may be necessary to avoid performance decreases and impacts to your health. There’s also some indication that taking breaks may lead to better work. See below for just a few examples:


Additional Resources


Tip of the Week: Software Linting with R



This article covers using the software technique of linting on R code in order to improve code quality, development velocity, and collaboration.


TLDR (too long, didn’t read); Use software linting (static analysis) practices on your R code with existing packages lintr and styler (among others). These linters may be applied using pre-commit in your local development environment or as continuous tests using, for example, Github Actions.


Treating R as Software


“Many users think of R as a statistics system. We prefer to think of it as an environment within which statistical techniques are implemented.”


(R-Project: What is R?)


The R programming language is sometimes treated as only a statistics system instead of software. This treatment can sometimes lead to common development issues which are also experienced in other languages. Addressing R as software enables developers to enhance their work by benefiting from existing concepts applied to many other languages.


Linting with R

flowchart LR
  write[Write R code] --> |check| check[Check code with linters]
  check --> |revise| write

Workflow loop depicting writing R code and revising with linters.


Software linting, or static analysis, is one way to ensure a minimum level of code quality without writing new tests. Linting checks how your code is structured without running it to make sure it abides by common language paradigms and logical structures. Using linting tools allows a developer to gain quick insights about their code before it is viewed or used by others.


One way to lint your R code is by using the lintr package. The lintr package is also complementary to the styler package, which formats the syntax of R code in a consistent way. Both of these can be used independently or as part of continuous quality checks for R code repositories.


Automated Linting Checks with R

flowchart LR
  subgraph development
    write
    check
  end
  subgraph linters
    direction LR
    lintr
    styler
  end
  check <-.- linters
  write[Write R code] --> |check| check[Check code with pre-commit]
  check --> |revise| write

Workflow showing development with pre-commit using multiple linters.


lintr and styler can be incorporated into automated checks to help make sure linting (or other steps) is always applied to new code. One tool which can help with this is pre-commit, which acts as both a local development tool and a means of providing observability within source control (more on this later).


Using pre-commit locally enables quick feedback loops using one or many checkers (such as lintr, styler, or others). Pre-commit may be used through git hooks or manually by running pre-commit run ... from the command line. See this example of pre-commit checks with R for an example of multiple pre-commit checks for R code.
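As one possible sketch, a .pre-commit-config.yaml using the community lorenzwalthert/precommit hook repository might look like the following (the hook ids and pinned revision are assumptions to verify against that project’s documentation):

```yaml
# .pre-commit-config.yaml (sketch; verify hook ids and rev against the hook repo)
repos:
  - repo: https://github.com/lorenzwalthert/precommit
    rev: v0.3.2   # pin to a real release of the hook repository
    hooks:
      - id: style-files   # formats R code with styler
      - id: lintr         # checks R code with lintr
```

With this file in place, `pre-commit run --all-files` would apply both checks, and `pre-commit install` would wire them into git hooks so they run on each commit.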


Continuous and Observable Testing for R

flowchart LR
  subgraph development [local development]
    direction LR
    write
    check
    commit
  end
  subgraph remote[Github repository]
    direction LR
    action["Check code (remotely)"]
  end
  write[Write R code] --> |check| check[Check code with pre-commit]
  check --> |revise| write
  check --> commit[commit + push]
  commit --> |optional trigger| action
  check -.-> |perform same checks| action

Workflow showing pre-commit used as continuous testing tool with Github.


Pre-commit linting checks can also be incorporated into continuous testing performed on your repository. One way to do this is using Github Actions. Github Actions provides a programmatic way to specify automatic steps taken as changes occur to a repository.


Pre-commit provides an example Github Action which will automatically check and alert repository maintainers when code issues are detected. Using pre-commit in this way allows R developers to ensure lintr checks are performed on any new work checked into a repository. This can help decrease pull request (PR) review time and standardize how code collaboration takes place for R developers.
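A minimal workflow using the pre-commit project’s published action might look like the following sketch (the file name and pinned versions are illustrative and should be checked against current releases):

```yaml
# .github/workflows/pre-commit.yml (sketch)
name: pre-commit
on:
  pull_request:
  push:
    branches: [main]
jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - uses: pre-commit/action@v3.0.1
```

Each push or pull request would then run the same hooks configured in .pre-commit-config.yaml, surfacing failures directly on the PR.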


Resources


Please see the following resources on this topic.


Tip of the Week: Branch, Review, and Learn



Git provides a feature called branching which facilitates parallel and segmented programming work through commits with version control. Using branching enables both work concurrency (multiple people working on the same repository at the same time) as well as a chance to isolate and review specific programming tasks. This article covers some conceptual best practices with branching, reviewing, and merging code using Github.


Please note: the content below represents one opinion in a larger space of Git workflow concepts (it’s not perfect!). Developer cultures may vary on these topics; be sure to acknowledge people and culture over exclusive or absolute dedication to what is found below.


TLDR (too long, didn’t read); Use git branching techniques to segment the completion of programming tasks, gradually and consistently committing small changes (practicing festina lente, or “make haste, slowly”). When a group of small changes is ready on a branch, request a pull request review and take advantage of comments to continuously improve the work. Prepare for a branch merge after review by deciding which merge strategy is appropriate and by automating merge requirements with branch protection rules.


Concept: Coursework Branching

flowchart LR
  subgraph Course
    direction LR
    open["open\nassignment"]
    turn_in["review\nassignment"]
  end
  subgraph Student ["     Student"]
    direction LR
    work["completed\nassignment"]
  end
  open -.-> turn_in
  open --> |works towards| work
  work --> |seeks review| turn_in

An example course and student assignment workflow.


Git branching practices may be understood in context with similar workflows from real life. Consider a student taking a course, where an assignment is given to them to complete. In addition to the steps shown in the diagram above, it’s important to think about why this pattern is beneficial:


Branching to Complete an “Assignment”

%%{init: { 'logLevel': 'debug', 'theme': 'default' , 'themeVariables': {
      'git0': '#4F46E5',
      'git1': '#10B981',
      'gitBranchLabel1': '#ffffff'
} } }%%
    gitGraph
       commit id: "..."
       commit id: "opened"
       branch assignment
       checkout assignment
       commit id: "completed"
       checkout main

An example git diagram showing assignment branch based off main.


Following the course assignment workflow, the diagram above shows an in-progress assignment branch based off of the main branch. When the assignment branch is created, we bring into it everything we know from main (the course) so far in the form of commits, or groups of changes to various files. Branching allows us to make consistent and well described changes based on what’s already happened without impacting others’ work in the meantime.


Branching best practices:

- Keep the name and work within branches dedicated to a specific and focused purpose. For example: a branch named fix-links-in-docs might entail work related to fixing HTTP links within documentation.
- Consider the use of Github forks (along with branches within the fork) to help further isolate and enrich work potential. Forks also allow remixing existing work into new possibilities.
- Festina lente, or “make haste, slowly”: commits on any branch represent small chunks of a cohesive idea which will eventually be brought to main. It is often beneficial to be consistent with small, gradual commits to avoid a rushed or incomplete submission. The same applies more generally for software; taking time upfront to do things well can mean time saved later.
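The assignment-style branching above can be tried out with plain git commands. The following is a minimal sketch in a throwaway repository (the branch name `assignment` and commit messages mirror the diagrams; the identity values are placeholders):

```shell
set -e
tmp="$(mktemp -d)" && cd "$tmp"
git init -q -b main                      # -b requires git >= 2.28
git config user.email "dev@example.com"  # throwaway identity for the demo
git config user.name "Dev"
git commit -q --allow-empty -m "opened"
git switch -q -c assignment              # branch off main for the focused task
git commit -q --allow-empty -m "completed"
git switch -q main
git merge -q --no-ff -m "reviewed" assignment  # merge after review
git branch -d assignment                 # tidy up the merged branch
```

In real work the empty commits would be small, well-described code changes, and the merge would happen through a reviewed pull request rather than a local `git merge`.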

Reviewing the Branched Work

%%{init: { 'logLevel': 'debug', 'theme': 'default' , 'themeVariables': {
      'git0': '#6366F1',
      'git1': '#10B981',
      'gitBranchLabel1': '#ffffff'
} } }%%
    gitGraph
       commit id: "..."
       commit id: "opened"
       branch assignment
       checkout assignment
       commit id: "completed"
       checkout main
       merge assignment id: "reviewed"

An example git diagram showing assignment branch being merged with main after a review.


The diagram above depicts a merge from the assignment branch to pull the changes into the main branch, simulating an assignment being returned for review within a course. While merges may be forced without review, it’s a best practice to create a Pull Request (PR) review (also known as a Merge Request (MR) on some systems) and ask other members of your team to review it. Doing this provides a chance to make revisions before code changes are “finalized” within the main branch.


Github provides special tools for reviews which can assist both the author and reviewer:

- Keep code changes intended for review small, enabling reviewers to reason through the work and provide feedback more quickly, practicing incremental continuous improvement (it may be difficult to address everything at once!). This may also keep the git history for a repository clearer.
- Github comments: overall review comments (encompassing all work from the branch) and inline comments (inquiring about individual lines of code) may be provided. Inline comments may also include code suggestions, which allow code-based revision suggestions that may be committed directly to the branch using markdown ```suggestion codeblocks.
- Github issues: creating issues from comments allows new repository issues to be created to address topics outside of the current PR.

Merging the Branch after Review

%%{init: { 'logLevel': 'debug', 'theme': 'default' , 'themeVariables': {
      'git0': '#6366F1'
} } }%%
    gitGraph
       commit id: "..."
       commit id: "opened"
       commit type: HIGHLIGHT id: "reviewed"
       commit id: "...."

An example git diagram showing the main branch after the assignment branch has been merged (and removed).


Changes may be made within the assignment branch until the work is in a state where the authors and reviewers are satisfied. At this point, the branch changes may be merged into main. Approvals are sometimes provided informally (for ex., with a comment: “LGTM (looks good to me)!”) or explicitly (for ex., approvals within Github) to indicate or enable branch merge readiness. After the merge, changes may continue to be made in a similar way (perhaps accounting for concurrently branched work elsewhere). Generally, a merged branch may be removed afterwards to help maintain an organized working environment (see Github PR branch removal).


Github provides special tools for merging:

- Decide which merge strategy is appropriate (there are many!): Github offers several merge strategies (merge commits, squash merges, and rebase merging). Take time to understand them and choose which one works best.
- Consider using branch protection to automate merge requirements: the main or other branches may be “protected” against merges using branch protection rules. These rules can require reviewer approvals or automatic status checks to pass before changes may be merged.
- Use merge queuing to manage multiple PR’s: when there are many unmerged PR’s, it can sometimes be difficult to document and ensure each is merged in a desired sequence. Consider using merge queues to help with this process.

Additional Resources


The links below may provide additional guidance on using these git features, including in-depth coverage of various features and related configuration.


Tip of the Week: Automate Software Workflows with GitHub Actions



There are many routine tasks which can be automated to help save time and increase reproducibility in software development. GitHub Actions provides one way to accomplish these tasks using code-based workflows and related workflow implementations. This type of automation is commonly used to perform tests, builds (preparing for the delivery of the code), or delivery itself (sending the code or related artifacts where they will be used).


TLDR (too long, didn’t read); Use GitHub Actions to perform continuous integration work automatically by leveraging Github’s workflow specification and the existing marketplace of already-created Actions. You can test these workflows with Act, which can enhance development with this feature of Github. Consider making use of “write once, run anywhere” (WORA) and Dagger in conjunction with GitHub Actions to enable reproducible workflows for your software projects.


Workflows in Software

flowchart LR
  start((start)) --> action
  action["action(s)"] --> en((end))
  style start fill:#6EE7B7
  style en fill:#FCA5A5

An example workflow.


Workflows consist of sequenced activities used by various systems. Software development workflows help accomplish work the same way each time by using what are commonly called “workflow engines”. Generally, workflow engines are provided code which indicates beginnings (what triggers a workflow to begin), actions (work being performed in sequence), and an ending (where the workflow stops). There are many workflow engines, including some which help accomplish work alongside version control.


GitHub Actions

flowchart LR
  subgraph workflow [GitHub Actions Workflow Run]
    direction LR
    action["action(s)"] --> en((end))
    start((event\ntrigger))
  end
  start --> action
  style start fill:#6EE7B7
  style en fill:#FCA5A5

A diagram showing GitHub Actions as a workflow.


GitHub Actions is a feature of GitHub which allows you to run workflows in relation to your code as a continuous integration (including automated testing, builds, and deployments) and general automation tool. For example, one can use GitHub Actions to make sure code related to a GitHub Pull Request passes certain tests before it is allowed to be merged. GitHub Actions may be specified using YAML files within your repository’s .github/workflows directory by using syntax specific to Github’s workflow specification. Each YAML file under the .github/workflows directory can specify workflows to accomplish tasks related to your software work. GitHub Actions workflows may be customized to your own needs, or use an existing marketplace of already-created Actions.
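For instance, a minimal workflow file might look like the following sketch (the file name, trigger, and step contents are illustrative; the checkout action version should be verified against current releases):

```yaml
# .github/workflows/example.yml (sketch)
name: example-workflow
on:
  pull_request:        # run whenever a pull request is opened or updated
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run a check
        run: echo "run tests or linters here"
```

Committing a file like this is all it takes for GitHub to begin running the workflow on matching events.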

Image showing GitHub Actions tab on GitHub website.

GitHub provides an “Actions” tab for each repository which helps visualize and control Github Actions workflow runs. This tab shows a history of all workflow runs in the repository. For each run, it shows whether the run succeeded or not, the associated logs, and controls to cancel or re-run it.


GitHub Actions Examples: GitHub Actions is sometimes better understood with examples. See the following references for a few basic examples of using GitHub Actions in a simulated project repository.


Testing with Act

flowchart LR
  subgraph container ["local simulation container(s)"]
    direction LR
    subgraph workflow [GitHub Actions Workflow Run]
      direction LR
      start((event\ntrigger))
      action --> en((end))
    end
  end
  start --> action
  act[Run Act] -.-> |Simulate\ntrigger| start
  style start fill:#6EE7B7
  style en fill:#FCA5A5

A diagram showing how GitHub Actions workflows may be triggered from Act


One challenge with GitHub Actions is a lack of standardized local testing tools. For example, how will you know that a new GitHub Actions workflow will function as expected (or at all) without pushing to the GitHub repository? One third-party tool which can help with this is Act. Act uses Docker images (which require Docker Desktop) to simulate running a GitHub Actions workflow within your local environment. Using Act can help avoid guessing what will occur when a GitHub Actions workflow is added to your repository. See Act’s installation documentation for more information on getting started with this tool.


Nested Workflows with GitHub Actions

flowchart LR

  subgraph action ["Nested Workflow (Dagger, etc)"]
    direction LR
    actions
    start2((start)) --> actions
    actions --> en2((end))
    en2((end))
  end
  subgraph workflow2 [Local Environment Run]
    direction LR
    run2[run workflow]
    en3((end))
    start3((event\ntrigger))
  end
  subgraph workflow [GitHub Actions Workflow Run]
    direction LR
    start((event\ntrigger))
    run[run workflow]
    en((end))
  end

  start --> run
  start3 --> run2
  action -.-> run
  run --> en
  run2 --> en3
  action -.-> run2
  style start fill:#6EE7B7
  style start2 fill:#D1FAE5
  style start3 fill:#6EE7B7
  style en fill:#FCA5A5
  style en2 fill:#FFE4E6
  style en3 fill:#FCA5A5

A diagram showing how GitHub Actions may leverage nested workflows with tools like Dagger.


There are times when GitHub Actions may be too constricting or Act may not accurately simulate workflows. We also might seek to “write once, run anywhere” (WORA) to enable flexible development on many environments. One workaround to this challenge is to use nested workflows which are compatible with local environments and GitHub Actions environments. Dagger is one tool which enables programmatically specifying and using workflows this way. Using Dagger allows you to trigger workflows on your local machine or GitHub Actions with the same underlying engine, meaning there are fewer inconsistencies or guesswork for developers (see here for an explanation of how Dagger works).


There are also other alternatives to Dagger you may want to consider based on your use case, preference, or interest. Earthly is similar to Dagger and uses “earthfiles” as a specification. Both Dagger and Earthly (in addition to GitHub Actions) use container-based approaches, which in and of themselves present additional alternatives outside the scope of this article.


GitHub Actions with Nested Workflow Example: Reference this example for a brief demonstration of how GitHub Actions and Dagger may be used together.


Closing Remarks


Using GitHub Actions through the above methods can help automate your technical work and increase the quality of your code with sometimes very little additional effort. Saving time through this form of automation can provide additional flexibility to accomplish more complex work which requires your attention (perhaps using timeboxing techniques). Even small amounts of time saved can turn into large opportunities for other work. On this note, be sure to explore how GitHub Actions can improve things for your software endeavors.


Tip of the Week: Using Python and Anaconda with the Alpine HPC Cluster



This post is intended to help demonstrate the use of Python on Alpine, a High Performance Compute (HPC) cluster hosted by the University of Colorado Boulder’s Research Computing. We use Python here by way of Anaconda environment management to run code on Alpine. This post will cover a background on the technologies and how to use the contents of an example project repository as though it were a project you were working on and wanting to run on Alpine.


Diagram showing a repository’s work as being processed on Alpine.


Table of Contents

1. Background: here we cover the background of Alpine and related technologies.
2. Implementation: in this section we use the contents of an example project repository on Alpine.

Background


Why would I use Alpine?


Diagram showing common benefits of Alpine and HPC clusters.


Alpine is a High Performance Compute (HPC) cluster. HPC environments provide shared computer hardware resources like memory, CPU, GPU, or others to run performance-intensive work. Reasons for using Alpine might include:


How does Alpine work?


Diagram showing high-level user workflow and Alpine components.


Alpine’s compute resources are used through compute nodes in a system called Slurm. Slurm is a system that allows a large number of users to run jobs on a cluster of computers; the system figures out how to use all the computers in the cluster to execute all the users’ jobs fairly (i.e., giving each user approximately equal time and resources on the cluster). A job is a request to run something, e.g. a bash script or a program, along with specifications about how much RAM and CPU it needs, how long it can run, and how it should be executed.


Slurm’s role in general is to take in a job (submitted via the sbatch command) and put it into a queue (also called a “partition” in Slurm). For each job in the queue, Slurm constantly tries to find a computer in the cluster with enough resources to run that job, then, when an available computer is found, runs the program the job specifies on that computer. As the program runs, Slurm records its output to files and finally reports the program’s exit status (either completed or failed) back to the job manager.


Importantly, jobs can either be marked as interactive or batch. When you submit an interactive job, sbatch will pause while waiting for the job to start and then connect you to the program, so you can see its output and enter commands in real time. On the other hand, a batch job will return immediately; you can see the progress of your job using squeue, and you can typically see the output of the job in the folder from which you ran sbatch unless you specify otherwise. Data for or from Slurm work may be stored temporarily on local storage or on user-specific external (remote) storage.
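A batch job is described by a script whose #SBATCH comment lines carry the resource specifications. The following is a sketch only; the job name, resource values, and file names are illustrative, and the exact options available should be checked against Alpine’s current documentation:

```shell
#!/bin/bash
# example-job.sh -- sketch of a Slurm batch job script (values are illustrative)
#SBATCH --job-name=example-job
#SBATCH --time=00:10:00               # wall-clock limit for the job
#SBATCH --ntasks=1                    # number of tasks (here, one CPU core)
#SBATCH --mem=1G                      # memory request
#SBATCH --output=example-job.%j.out   # %j expands to the Slurm job id

echo "Hello from $(hostname)"
```

On a login node this would be submitted with `sbatch example-job.sh` and monitored with `squeue -u $USER`; Slurm writes the script’s output to the file named by --output.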

- -
- - -
- -

Wait, what are “nodes”?

- -

A simplified way to understand the architecture of Slurm on Alpine is through login and compute “nodes” (computers). Login nodes act as a place to prepare and submit jobs which will be completed on compute nodes. Login nodes are never used to execute Slurm jobs, whereas compute nodes are exclusively accessed via a job. Login nodes have limited resource access and are not recommended for running procedures.

- -
-
- -

One can interact with Slurm on Alpine by use of Slurm interfaces and directives. A quick way of accessing Alpine resources is through the use of the acompile command, which starts an interactive job on a compute node with some typical default parameters for the job. Since acompile requests very modest resources (1 hour and 1 CPU core at the time of writing), you’ll typically quickly be connected to a compute node. For more intensive or long-lived interactive jobs, consider using sinteractive, which allows for more customization: Interactive Jobs. One can also access Slurm directly through various commands on Alpine.

- -

Many common software packages are available through the Modules package on Alpine (UCB RC documentation: The Modules System).

- -

How does Slurm work?

- -

- -

Diagram showing how Slurm generally works.

- -

Using Alpine effectively involves knowing how to leverage Slurm. A simplified way to understand how Slurm works is through the following sequence. Please note that some steps and additional complexity are omitted for the purposes of providing a basis of understanding.

- -
1. Create a job script: build a script which will configure and run procedures related to the work you seek to accomplish on the HPC cluster.
2. Submit the job to Slurm: ask Slurm to run a set of commands or procedures.
3. Job queue: Slurm will queue the submitted job alongside others (recall that the HPC cluster is a shared resource), providing information about progress as time goes on.
4. Job processing: Slurm will run the procedures in the job script as scheduled.
5. Job completion or cancellation: submitted jobs eventually reach completion or cancellation states, with information about what happened saved inside Slurm.
- -
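The steps above can be sketched as a minimal Slurm job script. This is a hedged example: the time, resource, and output values below are illustrative placeholders, and real partitions, accounts, and limits vary by cluster (consult Alpine's documentation for those that apply to your account).

```shell
#!/bin/bash
# example_job.sh: a minimal Slurm job script.
# Lines beginning with #SBATCH are directives read by Slurm at submission time.
#SBATCH --job-name=example-job
#SBATCH --time=00:10:00             # maximum wall time (10 minutes)
#SBATCH --ntasks=1                  # number of CPU cores requested
#SBATCH --mem=1G                    # memory requested
#SBATCH --output=example-job.%j.out # log file; %j becomes the job ID

# the commands below run on the allocated compute node
echo "Job running on host: $(hostname)"
```

A script like this would be submitted with `sbatch example_job.sh` and monitored with `squeue --user=$USER`.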

How do I store data on Alpine?

- -

- -

Data used or produced by your jobs on Alpine may live in a number of different data storage locations. Be sure to follow the acceptable data storage and use policies of Alpine, avoiding the use of certain sensitive information and other items. These locations may be distinguished in two ways:

- -
1. Alpine local storage (sometimes temporary): Alpine provides a number of temporary data storage locations for accomplishing your work. ⚠️ Note: some of these locations may be periodically purged and are not a suitable location for long-term data hosting (see here for more information)! Storage locations available (see this link for full descriptions):
   - Home filesystem: 2 GB of backed up space under /home/$USER (where $USER is your RMACC or Alpine username).
   - Projects filesystem: 250 GB of backed up space under /projects/$USER.
   - Scratch filesystem: 10 TB (10,240 GB) of space which is not backed up under /scratch/alpine/$USER.
2. External / remote storage: users are encouraged to explore external data storage options for long-term hosting. Examples may include the following:
- -

How do I send or receive data on Alpine?

- -

- -

Diagram showing external data storage being used to send or receive data on Alpine local storage.

- -

Data may be sent to or gathered from Alpine using a number of different methods. These may vary depending on the external data storage being referenced, the code involved, or your group’s available resources. Please reference the following documentation from the University of Colorado Boulder’s Research Computing regarding data transfers: The Compute Environment - Data Transfer. Please note: due to the authentication configuration of Alpine, many local or SSH-key based methods are not available for CU Anschutz users. As a result, Globus represents one of the best options available (see 3. 📂 Transfer data results below). While the Globus tutorial in this document describes how you can download data from Alpine to your computer, note that you can also use Globus to transfer data to Alpine from your computer.

- -

Implementation

- -

- -

Diagram showing how an example project repository may be used within Alpine through primary steps and processing workflow.

- -

Use the following steps to understand how Alpine may be used with an example project repository to run example Python code.

- -

0. 🔑 Gain Alpine access

- -

First you will need to gain access to Alpine. This access is provided to members of the University of Colorado Anschutz through RMACC and is separate from other credentials which may be provided by default in your role. Please see the following guide from the University of Colorado Boulder’s Research Computing covering how to request access and generally how this works for members of the University of Colorado Anschutz.

- - - -

1. 🛠️ Prepare code on Alpine

- -
[username@xsede.org@login-ciX ~]$ cd /projects/$USER
[username@xsede.org@login-ciX username@xsede.org]$ git clone https://github.com/CU-DBMI/example-hpc-alpine-python
Cloning into 'example-hpc-alpine-python'...
... git output ...
[username@xsede.org@login-ciX username@xsede.org]$ ls -l example-hpc-alpine-python
... ls output ...
- -

An example of what this preparation section might look like in your Alpine terminal session.

- -

Next we will prepare our code within Alpine. We do this because we may develop and source-control code outside of Alpine. In the case of this example work, we assume git as an interface to GitHub as the source control host.

- -

Below you’ll find the general steps associated with this process.

- -
1. Log in to the Alpine command line (reference this guide).
2. Change directory into the Projects filesystem (generally we’ll assume processed data produced by this code are large enough to warrant the need for additional space): `cd /projects/$USER`
3. Use git (available on Alpine by default) commands to clone this repo: `git clone https://github.com/CU-DBMI/example-hpc-alpine-python`
4. Verify the contents were received as desired (this should show the contents of the example project repository): `ls -l example-hpc-alpine-python`
- - - -

- -
- - -
- -

What if I need to authenticate with GitHub?

- -

There are times where you may need to authenticate with GitHub in order to accomplish your work. From a GitHub perspective, you will want to use either GitHub Personal Access Tokens (PAT) (recommended by GitHub) or SSH keys associated with the git client on Alpine. Note: if you are prompted for a username and password from git when accessing a GitHub resource, the password is now a token such as a PAT instead of your account password (reference). See the following guide from GitHub for more information on how authentication through git to GitHub works:
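For HTTPS access with a PAT, one minimal sketch is to enable git's in-memory credential cache so the token only needs to be pasted once per session (the repository URL in the comment below is a placeholder):

```shell
# cache HTTPS credentials in memory for one hour so a GitHub
# Personal Access Token (PAT) only needs to be entered once per session
git config --global credential.helper 'cache --timeout=3600'

# afterwards, clone over HTTPS; when git prompts for a password,
# paste the PAT (not your GitHub account password), for example:
#   git clone https://github.com/your-org/your-repo.git
```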

- - - -
-
- -

2. ⚙️ Implement code on Alpine

- -
[username@xsede.org@login-ciX ~]$ sbatch --export=CSV_FILEPATH="/projects/$USER/example_data.csv" example-hpc-alpine-python/run_script.sh
[username@xsede.org@login-ciX username@xsede.org]$ tail -f example-hpc-alpine-python.out
... tail output (ctrl/cmd + c to cancel) ...
[username@xsede.org@login-ciX username@xsede.org]$ head -n 2 example_data.csv
... data output ...
- -

An example of what this implementation section might look like in your Alpine terminal session.

- -

After our code is available on Alpine we’re ready to run it using Slurm and related resources. We use Anaconda to build a Python environment with specified packages for reproducibility. The main goal of the Python code related to this work is to create a CSV file with random data at a specified location. We’ll use Slurm’s sbatch command, which submits batch scripts to Slurm using various options.

- -
1. Use the sbatch command with exported variable CSV_FILEPATH: `sbatch --export=CSV_FILEPATH="/projects/$USER/example_data.csv" example-hpc-alpine-python/run_script.sh`
2. After a short moment, use the tail command to observe the log file created by Slurm for this sbatch submission. This file can help you understand where things are at and if anything went wrong: `tail -f example-hpc-alpine-python.out`
3. Once you see from the log file that the work has completed, take a look at the top 2 lines of the data file using the head command to verify the data arrived as expected (column names with random values): `head -n 2 example_data.csv`
- -

3. 📂 Transfer data results

- -

- -

Diagram showing how example_data.csv may be transferred from Alpine to a local machine using Globus solutions.

- -

Now that the example data output from the Slurm work is available, we need to transfer that data to a local system for further use. In this example we’ll use Globus as a data transfer method from Alpine to our local machine. Please note: always be sure to check data privacy policies, which may change the methods or storage locations you may use for your data!

- -
1. Globus local machine configuration
   1. Install Globus Connect Personal on your local machine.
   2. During installation, you will be prompted to log in to Globus. Use your ACCESS credentials.
   3. During installation login, note the label you provide to Globus. This will be used later, referenced as the “Globus Connect Personal label”.
   4. Ensure you add and (importantly) provide write access to a local directory via Globus Connect Personal - Preferences - Access where you’d like the data to be received from Alpine to your local machine.
2. Globus web interface
   1. Use your ACCESS credentials to log in to the Globus web interface.
   2. Configure the File Manager left side (source selection):
      1. On the File Manager tab, use the Collection input box to search for or select “CU Boulder Research Computing ACCESS”.
      2. Use the Path input box to enter /projects/your_username_here/ (replacing “your_username_here” with your username from Alpine, including the “@” symbol if it applies).
   3. Configure the File Manager right side (destination selection):
      1. Use the Collection input box to search for or select the Globus Connect Personal label you provided in earlier steps.
      2. Use the Path input box to enter the local path which you made accessible in earlier steps.
   4. Begin the Globus transfer:
      1. On the left side (source selection), check the box next to the file example_data.csv.
      2. Click the “Start ▶️” button to begin the transfer from Alpine to your local directory.
      3. After clicking “Start ▶️”, you may see a message in the top right: “Transfer request submitted successfully”. You can click the link to view the details of the transfer.
      4. After a short period, the file will be transferred and you should be able to verify the contents on your local machine.
- -

Further References

- - -
- - - - - -
- - - -
-
-
- - -
- - - - - - - diff --git a/preview/pr-29/2023/09/05/Python-Packaging-as-Publishing.html b/preview/pr-29/2023/09/05/Python-Packaging-as-Publishing.html deleted file mode 100644 index b121a4655b..0000000000 --- a/preview/pr-29/2023/09/05/Python-Packaging-as-Publishing.html +++ /dev/null @@ -1,1106 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tip of the Week: Python Packaging as Publishing | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- -
- - - - - - - - - - - - - -
-

Tip of the Week: Python Packaging as Publishing

- - - - - - - - -
- - - - - -
- - - -

Tip of the Week: Python Packaging as Publishing

- -
- - -
- -


- -
-
- - - -

Python packaging is the craft of preparing for and reaching distribution of your Python work to wider audiences. Following packaging conventions helps your software work become more understandable, trustworthy, and connected (to others and their work). Taking advantage of common packaging practices also strengthens our collective superpower: collaboration. This post will cover preparation aspects of packaging, readying software work for wider distribution.

- - - -

TLDR (too long, didn’t read);

- -

Use Pythonic packaging tools and techniques to help avoid code decay and unwanted code smells and increase your development velocity. Increase understanding with unsurprising directory structures like those exhibited in pypa/sampleproject or scientific-python/cookie. Enhance trust by being authentic on source control systems like GitHub (by customizing your profile), staying up to date with the latest supported versions of Python, and using security linting tools like PyCQA/bandit through visible + automated GitHub Actions ✅ checks. Connect your projects to others using CITATION.cff files, CONTRIBUTING.md files, and using environment + packaging tools like poetry to help others reproduce the same results from your code.

- -

Why practice packaging?

- -
- - How are a page with some text and a book different? - - -
- How are a page with some text and a book different? - -
- -
- -

The practice of Python packaging is similar to that of publishing a book. Consider how a page with some text is different from a book. How and why are these things different?

- - - -
- - Code undergoing packaging to achieve understanding, trust, and connection for an audience. - - -
- Code undergoing packaging to achieve understanding, trust, and connection for an audience. - -
- -
- -

These can be thought of as metaphors for packaging in Python. Books have a smell which sometimes comes from how they were stored, treated, or maintained. While there are pleasant book smells, books might also smell soggy from being left in the rain or stored without maintenance for too long. Just like books, software can sometimes have negative code smells indicating a lack of care or a less sustainable condition. Following good packaging practices helps to avoid unwanted code smells while increasing development velocity, maintainability of software through understandability, trustworthiness of the content, and connection to other projects.

- -
- - -
- -

Note: these techniques can also work just as well for inner source collaboration (private or proprietary development within organizations)! Don’t hesitate to use these on projects which may not be public facing in order to make development and maintenance easier (if only for you).

- -
-
- -
- - -
- -

“Wait, what are Python packages?”

- -
my_package/
│   __init__.py
│   module_a.py
│   module_b.py
- -

A Python package is a collection of modules (.py files) that usually include an “initialization file” __init__.py. This post will cover the craft of packaging which can include one or many packages.
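To make that definition concrete, here's a small self-contained sketch (the package and function names are invented for illustration) which lays out a package on disk and then imports from it:

```python
import sys
import tempfile
from pathlib import Path

# create a temporary directory to hold the example package
base = Path(tempfile.mkdtemp())

# lay out a package: a directory containing an __init__.py and a module
pkg = base / "my_package"
pkg.mkdir()
(pkg / "__init__.py").write_text("")  # marks the directory as a package
(pkg / "module_a.py").write_text(
    "def greet():\n    return 'hello from module_a'\n"
)

# make the package importable, then use it
sys.path.insert(0, str(base))
from my_package import module_a

print(module_a.greet())  # → hello from module_a
```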

- -
-
- -

Understanding: common directory structures

- -
project_directory
├── README.md
├── LICENSE.txt
├── pyproject.toml
├── docs
│   └── source
│       └── index.md
├── src
│   └── package_name
│       └── __init__.py
│       └── module_a.py
└── tests
    └── __init__.py
    └── test_module_a.py
- -

Python Packaging today generally assumes a specific directory design. Following this convention generally improves the understanding of your code. We’ll cover each of these below.

- -

Project root files

- -
project_directory
├── README.md
├── LICENSE.txt
├── pyproject.toml
│ ...
- - - -

Project sub-directories

- -
project_directory
│ ...
├── docs
│   └── source
│       └── index.md
├── src
│   └── package_name
│       └── __init__.py
│       └── module_a.py
└── tests
    └── __init__.py
    └── test_module_a.py
- - - -

Common directory structure examples

- -

The Python directory structure described above can be witnessed in the wild from the following resources. These can serve as a great resource for starting or adjusting your own work.

- - - -

Trust: building audience confidence

- -
- - How much does your audience trust your work?. - - -
- How much does your audience trust your work?. - -
- -
- -

Building an understandable body of content helps tremendously with audience trust. What else can we do to enhance project trust? The following elements can help improve an audience’s trust in packaged Python work.

- -

Source control authenticity

- -
- - Comparing the difference between a generic or anonymous user and one with greater authenticity. - - -
- Comparing the difference between a generic or anonymous user and one with greater authenticity. - -
- -
- -

Be authentic! Fill out your profile to help your audience know the author and why you do what you do. See here for GitHub’s documentation on filling out your profile. Doing this may seem irrelevant but can go a long way to making technical work more relatable.

- - - -

Staying up to date with supported Python releases

- -
- - Major Python releases and their support status. - - -
- Major Python releases and their support status. - -
- -
- -

Use Python versions which are supported (this changes over time). Python versions which are end-of-life may be difficult to support and are a sign of code decay for projects. Specify the version of Python which is compatible with your project by using environment specifications such as pyproject.toml files and related packaging tools (more on this below).
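Beyond declaring a constraint in packaging metadata, a package can also fail fast at runtime; a minimal sketch (the helper name and the 3.9 floor are illustrative, so match the floor to your declared constraint):

```python
import sys

def require_python(minimum=(3, 9)):
    """Raise early, with a clear message, on unsupported interpreters."""
    if sys.version_info < minimum:
        raise RuntimeError(
            f"package-name requires Python {minimum[0]}.{minimum[1]} or newer"
        )

# a guard like this often lives at the top of a package's __init__.py;
# (3, 0) is used here only so the example runs on any modern interpreter
require_python((3, 0))
```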

- - - -

Security linting and visible checks with GitHub Actions

- -
- - Make an effort to inspect your package for known security issues. - - -
- Make an effort to inspect your package for known security issues. - -
- -
- -

Use security vulnerability linters to help prevent undesirable or risky processing for your audience. Doing this is both practical for avoiding issues and conveys that you care about those using your package!

- - - -
- - The green checkmark from successful GitHub Actions runs can offer a sense of reassurance to your audience. - - -
- The green checkmark from successful GitHub Actions runs can offer a sense of reassurance to your audience. - -
- -
- -

Combining GitHub Actions with security linters and tests from your software validation suite can add an observable ✅ for your project. This provides the audience with a sense that you’re transparently testing and sharing the results of those tests.
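As a sketch of wiring this together (the workflow and job names are arbitrary, and the action versions shown are examples worth pinning deliberately), a GitHub Actions workflow that runs the bandit security linter on each push might look like:

```yaml
# .github/workflows/security-lint.yml
name: security-lint
on: [push, pull_request]

jobs:
  bandit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install and run bandit
        run: |
          pip install bandit
          bandit -r src
```

A failing bandit run fails the job, which surfaces as a visible ❌ instead of a ✅ on the commit or pull request.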

- - - -

Connection: personal and inter-package relationships

- -
- - How does your package connect with other work and people? - - -
- How does your package connect with other work and people? - -
- -
- -

Understandability and trust set the stage for your project’s connection to other people and projects. What can we do to facilitate connection with our project? Use the following techniques to help enhance your project’s connection to others and their work.

- -

Acknowledging authors and referenced work with CITATION.cff

- -
- - figure image - - -
- -

Add a CITATION.cff file to your project root in order to describe project relationships and acknowledgements in a standardized way. The CFF format is also GitHub compatible, making it easier to cite your project.
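A minimal CITATION.cff sketch (the title, author, version, and date below are placeholders) might look like:

```yaml
# CITATION.cff, placed in the project root
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "package-name"
version: "0.1.0"
date-released: "2023-09-05"
authors:
  - family-names: "Doe"
    given-names: "Jane"
```

GitHub reads this file and offers a “Cite this repository” option on the repository page.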

- - - -

Reaching collaborators using CONTRIBUTING.md

- -
- - CONTRIBUTING.md documents can help you collaborate with others. - - -
- CONTRIBUTING.md documents can help you collaborate with others. - -
- -
- -

Provide a CONTRIBUTING.md file in your project root to make clear support details, development guidance, code of conduct, and overall documentation surrounding how the project is governed.

- - - -

Environment management reproducibility as connected project reality

- -
- - Environment and packaging managers can help you connect with your audience. - - -
- Environment and packaging managers can help you connect with your audience. - -
- -
- -

Code without an environment specification is difficult to run in a consistent way. This can lead to “works on my machine” scenarios where different things happen for different people, reducing the chance that people can connect with a shared reality for how your code should be used.

- -
-

“But why do we have to switch the way we do things?” We’ve always been switching approaches (software approaches evolve over time)! A brief history of Python environment and packaging tooling:

- -
1. distutils, easy_install + setup.py (primarily used during the 1990’s - early 2000’s)
2. pip, setup.py + requirements.txt (primarily used during the late 2000’s - early 2010’s)
3. poetry + pyproject.toml (began use around the late 2010’s - ongoing)
- -

Using Python poetry for environment and packaging management

- -
- - figure image - - -
- -

Poetry is one Pythonic environment and packaging manager which can help increase reproducibility using pyproject.toml files. It’s one of many alternatives, such as hatch and pipenv.

- -
poetry directory structure template use

user@machine % poetry new --name=package_name --src .
Created package package_name in .

user@machine % tree .
.
├── README.md
├── pyproject.toml
├── src
│   └── package_name
│       └── __init__.py
└── tests
    └── __init__.py
- -

After installation, Poetry gives us the ability to initialize a directory structure similar to what we presented earlier by using the poetry new ... command. If you’d like a more interactive version of the same, use the poetry init command to fill out various sections of your project with detailed information.

- -
poetry format for project pyproject.toml

# pyproject.toml
[tool.poetry]
name = "package-name"
version = "0.1.0"
description = ""
authors = ["username <email@address>"]
readme = "README.md"
packages = [{include = "package_name", from = "src"}]

[tool.poetry.dependencies]
python = "^3.9"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
- -

Using the poetry new ... command also initializes the content of our pyproject.toml file with opinionated details (following the recommendation from earlier in the article regarding declared Python version specification).

- -
poetry dependency management

user@machine % poetry add pandas

Creating virtualenv package-name-1STl06GY-py3.9 in /pypoetry/virtualenvs
Using version ^2.1.0 for pandas

...

Writing lock file
- -

We can add dependencies directly using the poetry add ... command. This command also provides the possibility of using a group flag (for example poetry add pytest --group testing) to help organize and distinguish multiple sets of dependencies.

- - - -
Running Python from the context of poetry environments

% poetry run python -c "import pandas; print(pandas.__version__)"

2.1.0
- -

We can invoke the virtual environment directly using the poetry run ... command.

- - - -
Building source code with poetry

% pip install git+https://github.com/project/package_name
- -

Even if we don’t reach wider distribution on PyPI or elsewhere, source code managed by pyproject.toml and poetry can be used for “manual” distribution (with reproducible results) from GitHub repositories. When we’re ready to distribute pre-built packages on other networks we can also use the following:

- -
% poetry build

Building package-name (0.1.0)
  - Building sdist
  - Built package_name-0.1.0.tar.gz
  - Building wheel
  - Built package_name-0.1.0-py3-none-any.whl
- -

Poetry readies source code and pre-compiled versions of our code for distribution platforms like PyPI by using the poetry build ... command. We’ll cover more on these files and distribution steps in a later post!

-
- - - - - -
- - - -
-
-
- - -
- - - - - - - diff --git a/preview/pr-29/2023/10/04/Data-Quality-Validation.html b/preview/pr-29/2023/10/04/Data-Quality-Validation.html deleted file mode 100644 index dbc78b5635..0000000000 --- a/preview/pr-29/2023/10/04/Data-Quality-Validation.html +++ /dev/null @@ -1,893 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Tip of the Week: Data Quality Validation through Software Testing Techniques | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- -
- - - - - - - - - - - - - -
-

Tip of the Week: Data Quality Validation through Software Testing Techniques

- - - - - - - - -
- - - - - -
- - - -

Tip of the Week: Data Quality Validation through Software Testing Techniques

- -
- - -
- -


- -
-
- -

TLDR (too long, didn’t read);

- -

Implement data quality validation through software testing approaches which leverage ideas surrounding Hoare triples and Design by Contract (DbC). Balance reusability through component-based data testing with Great Expectations or Assertr. For greater specificity in your data testing, use database schema-like verification through Pandera or a JSON Schema validator. When possible, practice shift-left testing on data sources through the concept of “database(s) as code” via tools like Data Version Control (DVC) and Flyway.

- -

Introduction

- -

- -

Diagram showing input, in-process data, and output data as a workflow.

- - -

Data-oriented software development can benefit from a specialized focus on varying aspects of data quality validation. We can use software testing techniques to validate certain qualities of the data in order to meet a declarative standard (where one doesn’t need to guess or rediscover known issues). These validations come in a number of forms and generally follow existing software testing concepts, which we’ll expand upon below. This article will cover a few tools which leverage these techniques for addressing data quality validation testing.

-

Data Quality Testing Concepts

- -

Hoare Triple

- -

- -

One concept we’ll use to present these ideas is Hoare logic, which is a system for reasoning about software correctness. Hoare logic includes the idea of a Hoare triple ($\{P\}\;C\;\{Q\}$), where $P$ is a precondition assertion, $C$ is a command, and $Q$ is a postcondition assertion. Software development using data often entails (sometimes assumed) assertions of precondition from data sources, a transformation or command which changes the data, and a (sometimes assumed) assertion of postcondition in a data output or result.

- -

Design by Contract

- -

- -

Data testing through design by contract over Hoare triple.

- -

Hoare logic and software correctness help describe Design by Contract (DbC), a software approach involving the formal specification of “contracts” which help ensure we meet our intended goals. DbC helps describe how to create assertions when proceeding through Hoare triple states for data. These concepts provide a framework for thinking about the tools mentioned below.
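These triples can be made concrete with plain assertions; a minimal sketch (the record shape, column name, and bounds are invented for illustration):

```python
# a design-by-contract flavored data step: assert the precondition {P},
# run the command C, then assert the postcondition {Q}
records = [{"passenger_count": 1}, {"passenger_count": 3}]

# {P} precondition: every record has a non-null passenger_count
assert all(r.get("passenger_count") is not None for r in records)

# C command: a transformation which doubles each count
transformed = [
    {**r, "passenger_count": r["passenger_count"] * 2} for r in records
]

# {Q} postcondition: counts remain within expected bounds
assert all(0 <= r["passenger_count"] <= 10 for r in transformed)

print([r["passenger_count"] for r in transformed])  # → [2, 6]
```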

- -

Data Component Testing

- -

- -

Diagram showing data contracts as generalized and reusable “component” testing being checked through contracts and raising an error if they aren’t met or continuing operations if they are met.

- -

We often need to verify certain components of the data in order to ensure it meets minimum standards. The word “component” is used here in the context of component-based software design to group together reusable, modular qualities of the data where sometimes we don’t know (or don’t want to specify) granular aspects (such as schema, type, column name, etc.). These components are often implied by the software which will eventually use the data, which can emit warnings or errors when it finds the data does not meet those standards. Oftentimes these components are contracts checking postconditions of earlier commands or procedures, ensuring the data we receive is accurate to our intention. We can avoid these challenges by creating contracts for our data to verify the components of the result before it reaches later stages.

- -

Examples of these data components might include:

- - - -

Data Component Testing - Great Expectations

- -
"""
Example of using Great Expectations
Referenced with modifications from:
https://docs.greatexpectations.io/docs/tutorials/quickstart/
"""
import great_expectations as gx

# get gx DataContext
# see: https://docs.greatexpectations.io/docs/terms/data_context
context = gx.get_context()

# set a context data source
# see: https://docs.greatexpectations.io/docs/terms/datasource
validator = context.sources.pandas_default.read_csv(
    "https://raw.githubusercontent.com/great-expectations/gx_tutorials/main/data/yellow_tripdata_sample_2019-01.csv"
)

# add and save expectations
# see: https://docs.greatexpectations.io/docs/terms/expectation
validator.expect_column_values_to_not_be_null("pickup_datetime")
validator.expect_column_values_to_be_between("passenger_count", auto=True)
validator.save_expectation_suite()

# checkpoint the context with the validator
# see: https://docs.greatexpectations.io/docs/terms/checkpoint
checkpoint = context.add_or_update_checkpoint(
    name="my_quickstart_checkpoint",
    validator=validator,
)

# gather checkpoint expectation results
checkpoint_result = checkpoint.run()

# show the checkpoint expectation results
context.view_validation_result(checkpoint_result)
- -

Example code leveraging Python package Great Expectations to perform various data component contract validation.

- -

Great Expectations is a Python project which provides data contract testing features through the use of components called “expectations” about the data involved. These expectations act as a standardized way to define and validate components of the data in the same way across different datasets or projects. In addition to providing a mechanism for validating data contracts, Great Expectations also provides ways to view validation results, share expectations, and build data documentation. See the above example for a quick code reference of how these work.

- -

Data Component Testing - Assertr

- -
# Example using the Assertr package
# referenced with modifications from:
# https://docs.ropensci.org/assertr/articles/assertr.html
library(dplyr)
library(assertr)

# set our.data to reference the mtcars dataset
our.data <- mtcars

# simulate an issue in the data for contract specification
our.data$mpg[5] <- our.data$mpg[5] * -1

# use verify to validate that column mpg >= 0
our.data %>%
  verify(mpg >= 0)

# use assert to validate that column mpg is within the bounds of 0 to infinity
our.data %>%
  assert(within_bounds(0, Inf), mpg)
- -

Example code leveraging R package Assertr to perform various data component contract validation.

- -

Assertr is an R project which provides similar data component assertions in the form of verify, assert, and insist methods (see here for more documentation). Assertr enables similar but more lightweight functionality than that of Great Expectations. See the above for an example of how to use it in your projects.

- -

Data Schema Testing

- -

- -

Diagram showing data contracts as more granular specifications via “schema” testing being checked through contracts and raising an error if they aren’t met or continuing operations if they are met.

- -

Sometimes we need greater specificity than what a data component can offer. We can use data schema testing contracts in these cases. The word “schema” here is used in the context of database schemas, but oftentimes these specifications are suitable well beyond databases alone (including database-like formats such as dataframes). While reuse and modularity are more limited in these cases, schema contracts can be helpful for efforts where precision is valued or necessary to accomplish your goals. It’s worth mentioning that data schema and component testing tools often overlap (meaning you can interchangeably use them to accomplish both tasks).

- -

Data Schema Testing - Pandera

- -
"""
Example of using the Pandera package
referenced with modifications from:
https://pandera.readthedocs.io/en/stable/try_pandera.html
"""
import pandas as pd
import pandera as pa
from pandera.typing import DataFrame, Series


# define a schema
class Schema(pa.DataFrameModel):
    item: Series[str] = pa.Field(isin=["apple", "orange"], coerce=True)
    price: Series[float] = pa.Field(gt=0, coerce=True)


# simulate an invalid dataframe
invalid_data = pd.DataFrame.from_records(
    [{"item": "applee", "price": 0.5},
     {"item": "orange", "price": -1000}]
)


# set a decorator on a function which will
# check the schema as a precondition
@pa.check_types(lazy=True)
def precondition_transform_data(data: DataFrame[Schema]):
    print("here")
    return data


# precondition schema testing
try:
    precondition_transform_data(invalid_data)
except pa.errors.SchemaErrors as schema_excs:
    print(schema_excs)

# inline or implied postcondition schema testing
try:
    Schema.validate(invalid_data)
except pa.errors.SchemaError as schema_exc:
    print(schema_exc)
- -

Example code leveraging Python package Pandera to perform various data schema contract validation.

- -

DataFrame-like libraries such as Pandas can be verified using schema specification contracts through Pandera (see here for full DataFrame library support). Pandera helps define specific columns and column types, and also has some component-like features. It leverages a Pythonic class specification, similar to data classes and pydantic models, making it potentially easier to use if you already understand Python and DataFrame-like libraries. See the above example for a look into how Pandera may be used.

- -

Data Schema Testing - JSON Schema

- -
# Example of using the jsonvalidate R package.
# Referenced with modifications from:
# https://docs.ropensci.org/jsonvalidate/articles/jsonvalidate.html

schema <- '{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Hello World JSON Schema",
  "description": "An example",
  "type": "object",
  "properties": {
    "hello": {
      "description": "Provide a description of the property here",
      "type": "string"
    }
  },
  "required": [
    "hello"
  ]
}'

# create a schema contract for data
validate <- jsonvalidate::json_validator(schema, engine = "ajv")

# validate JSON using schema specification contract and invalid data
validate("{}")

# validate JSON using schema specification contract and valid data
validate('{"hello": "world"}')
- -

JSON Schema provides a vocabulary for validating schema contracts for JSON documents. There are several implementations of the vocabulary, including the Python package jsonschema and the R package jsonvalidate. Using these libraries allows you to define pre- or postcondition data schema contracts for your software work. See above for an R-based example of using this vocabulary to perform data schema testing.
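For a Python counterpart, the jsonschema package validates the same kind of contract; the sketch below mirrors the “hello world” schema from the R example above:

```python
# pip install jsonschema
from jsonschema import ValidationError, validate

# the same "hello world" contract as the R example, as a Python dict
schema = {
    "type": "object",
    "properties": {"hello": {"type": "string"}},
    "required": ["hello"],
}

# valid data passes silently (validate returns None)
validate(instance={"hello": "world"}, schema=schema)

# invalid data raises a ValidationError describing the broken contract
try:
    validate(instance={}, schema=schema)
except ValidationError as exc:
    print(f"contract failed: {exc.message}")
```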

- -

Shift-left Data Testing

- -

- -

Earlier portions of this article have primarily covered data validation of command side effects and postconditions. This is commonplace in development, where data sources are usually provided without the ability to validate their preconditions or definitions. Shift-left testing is a movement which focuses on validating earlier in the lifecycle when possible to avoid downstream issues which might otherwise occur.
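One lightweight way to “shift left” is to validate preconditions at the earliest boundary — for example, at the moment data is first read, before any transformation can consume it. A minimal sketch (the required column names here are hypothetical):

```python
import csv
import io

REQUIRED_COLUMNS = {"id", "value"}  # hypothetical contract for incoming data


def read_validated_csv(text: str) -> list:
    """Shift-left check: validate the precondition at read time,
    before any downstream transformation can consume bad data."""
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"precondition failed: missing columns {sorted(missing)}")
    return list(reader)


rows = read_validated_csv("id,value\n1,a\n2,b\n")  # contract met at the boundary
```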

- -

Shift-left Data Testing - Data Version Control (DVC)

- -

- -

Data sources undergoing frequent changes become difficult to use because we oftentimes don’t know when the data is from or what version it might be. This information is sometimes added in the form of filename additions or an update datetime column in a table. Data Version Control (DVC) is one tool which is specially purposed to address this challenge through source control techniques. Data managed by DVC allows software to be built in such a way that version preconditions are validated before reaching data transformations (commands) or postconditions.

- -

Shift-left Data Testing - Flyway

- -

- -

Database sources can leverage an idea nicknamed “database as code” (which builds on a similar idea about infrastructure as code) to help declare the schema and other elements of a database in the same way one would code. These ideas apply to both databases and also more broadly through DVC mentioned above (among other tools) via the concept “data as code”. Implementing this idea has several advantages, including source versioning, visibility, and replicability. One tool which implements these ideas is Flyway, which can manage and implement SQL-based files as part of software data precondition validation. A lightweight alternative to using Flyway is sometimes to include a SQL file which creates related database objects and becomes data documentation.
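As a sketch of that lightweight alternative (the table and column names below are hypothetical), a SQL file kept under version control can be applied at startup and then checked as a precondition before the database is used:

```python
import sqlite3

# contents of a hypothetical schema.sql, normally kept as a versioned
# file in the repository so it doubles as data documentation
SCHEMA_SQL = """
CREATE TABLE IF NOT EXISTS measurements (
    id INTEGER PRIMARY KEY,
    recorded_at TEXT NOT NULL,
    value REAL NOT NULL
);
"""

connection = sqlite3.connect(":memory:")
connection.executescript(SCHEMA_SQL)  # apply the declared schema

# precondition validation: confirm the expected table exists before use
tables = {
    row[0]
    for row in connection.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"
    )
}
assert "measurements" in tables
```

Tools like Flyway add migration ordering, history tracking, and rollback on top of this basic pattern.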

Tip of the Week: Codesgiving - Open-source Contribution Walkthrough

- - - - - - - - -
- - - - - -
- - - -


- -
- - -
- -


- -
-
- -

Introduction

- -
What good harvests from open-source have you experienced this year?
- -
- - -

Thanksgiving is a holiday practiced in many countries which focuses on gratitude for good harvests of the preceding year. In the United States, we celebrate Thanksgiving on the fourth Thursday of November each year, often by eating meals we create together with others. This post channels the spirit of Thanksgiving by giving our thanks through code as a “Codesgiving”, acknowledging and creating better software together.

- -

Giving Thanks to Open-source Harvests

- -

- -

Part of building software involves the use of code which others have built, maintained, and distributed for a wider audience. Using other people’s work often comes in the form of open-source “harvesting” as we find solutions to software challenges we face. Examples might include installing and depending upon Python packages from PyPI or R packages from CRAN within your software projects.

- -
-

“Real generosity toward the future lies in giving all to the present.” - Albert Camus

-
- -

These open-source projects have internal costs which are sometimes invisible to those who consume them. Every software project has an implied level of software gardening time costs involved to impede decay, practice continuous improvements, and evolve the work. One way to actively share our thanks for the projects we depend on is through applying our time towards code contributions on them.

- -

Many projects are in need of additional people’s thinking and development time. Have you ever noticed something that needs to be fixed or desirable functionality in a project you use? Consider adding your contributions to open-source!

- -

All Contributions Matter

- -

- -

Contributing to open-source can come in many forms, and contributions don’t need to be gigantic to make an impact. Software often involves simplifying complexity. Simplification requires many actions beyond solely writing code. For example, a short walk outside, a conversation with someone, or a nap can sometimes help us with breakthroughs when it comes to development. By the same token, open-source benefits greatly from communications on discussion boards, bug or feature descriptions, or other work that might not be strictly considered “engineering”.

- -

An Open-source Contribution Approach

- -

- -

The troubleshooting process as a workflow involving looped checks for verifying an issue and validating the solution fixes an issue.

- -

It can feel overwhelming to find a way to contribute to open-source. Similar to other software methodologies, modularizing your approach can help you progress without being overwhelmed. Using a troubleshooting approach like the above can help you break down big challenges into bite-sized chunks. Consider each step as a “module” or “section” which needs to be addressed sequentially.

- -

Embrace a Learning Mindset

- -
-

“Before you speak ask yourself if what you are going to say is true, is kind, is necessary, is helpful. If the answer is no, maybe what you are about to say should be left unsaid.” - Bernard Meltzer

-
- -

Open-source contributions almost always entail learning of some kind. Many contributions happen solely in the form of code and text communications, which are easily misinterpreted. Assume positive intent and accept input from others while upholding your own ideas to share successful contributions together. Prepare yourself by intentionally opening your mind to input from others, even if you’re sure you’re absolutely “right”.

- -
- - -
- -

Before communicating, be sure to use Bernard Meltzer’s self-checks mentioned above.

- -
1. Is what I’m about to say true?
   • Have I taken time to verify the claims in a way others can replicate or understand?
2. Is what I’m about to say kind?
   • Does my intention and communication channel kindness (and not cruelty)?
3. Is what I’m about to say necessary?
   • Do my words and actions here enable or enhance progress towards a goal (would the outcome be achieved without them)?
4. Is what I’m about to say helpful?
   • How does my communication increase the quality or sustainability of the project (or group)?
- -
-
- -

Setting Software Scheduling Expectations

- - - - - - - -
- - - -

Suggested ratio of time spent by type of work for an open-source contribution.

- -
1. 1/3 planning (~33%)
2. 1/6 coding (~16%)
3. 1/4 component and system testing (25%)
4. 1/4 code review, revisions, and post-actions (25%)
- -

This modified rule of thumb from The Mythical Man Month can assist with how you structure your time for an open-source contribution. Notice the emphasis on planning and testing, and keep these in mind as you progress (the actual programming time can be small if adequate time has been spent on planning). Notably, the original time fractions are modified here, with the final quarter of the time suggested as code review, revisions, and post-actions. Planning for the time expense of the added code review and related elements assists with keeping a learning mindset throughout the process (instead of feeling like the review is a “tack-on” or “optional / supplementary”). A good motto to keep in mind throughout this process is Festina lente, or “Make haste, slowly.” (take care to move thoughtfully and as slowly as necessary to do things correctly the first time).

- -

Planning an Open-source Contribution

- -

Has the Need Already Been Reported?

- -

- -

Be sure to check whether the bug or feature has already been reported somewhere! In a way, this is a practice of “Don’t repeat yourself” (DRY), where we attempt to avoid repeating the same block of code (in this case, the “code” can be understood as natural language). For example, you can look on GitHub Issues or GitHub Discussions with a search query matching the rough idea of what you’re thinking about. You can also use the GitHub search bar to automatically search multiple areas (including Issues, Discussions, Pull Requests, etc.) when you enter a query from the repository homepage. If it has been reported already, take a look to see if someone has made a code contribution related to the work already.

- -

An open discussion or report of the need doesn’t guarantee someone’s already working on a solution. If there aren’t yet any code contributions and it doesn’t look like anyone is working on one, consider volunteering to take a further look into the solution and be sure to acknowledge any existing discussions. If you’re unsure, it’s always kind to mention your interest in the report and ask for more information.

- -

Is the Need a Bug or Feature?

- - - - -
- - - -
- -

One way to help solidify your thinking and the approach is to consider whether what you’re proposing is a bug or a feature. A software bug is considered something which is broken or malfunctioning. A software feature is generally considered new functionality or a different way of doing things than what exists today. There’s often overlap between these, and sometimes they can inspire branching needs, but individually they usually are more of one than the other. If you can’t decide whether your need is a bug or a feature, consider breaking it down into smaller sub-components so they can be more of one or the other. Following this strategy will help you communicate the potential for contribution and also clarify the development process (for example, a critical bug might be prioritized differently than a nice-to-have new feature).

- -

Reporting the Need for Change

- -
# Using `function_x` with `library_y` causes `exception_z`

## Summary

As a `library_y` research software developer I want to use `function_x`
for my data so that I can share data for research outcomes.

## Reproducing the error

This error may be seen using Python v3.x on all major OS's using
the following code snippet:
...
- -

An example of a user story issue report with imagined code example.

- -

Open-source needs are often best reported through written stories captured within a bug or feature tracking system (such as GitHub Issues) which, if possible, also include example code or logs. One template for reporting issues is a “user story”. A user story typically comes in the form: As a < type of user >, I want < some goal > so that < some reason >. (Mountain Goat Software: User Stories). Alongside the story, it can help to add in a snippet of code which exemplifies a problem, new functionality, or a potential adjacent / similar solution. As a general principle, be as specific as you can without going overboard. Include things like programming language version, operating system, and other system dependencies that might be related.

- -

Once you have a good written description of the need, be sure to submit it where it can be seen by the relevant development community. For GitHub-based work, this is usually a GitHub Issue, but it can also entail discussion board posts to gather buy-in or consensus before proceeding. In addition to the specifics outlined above, also recall the learning mindset and Bernard Meltzer’s self-checks, taking time to acknowledge especially the potential challenges and already attempted solutions associated with the description (conveying kindness throughout).

- -

What Happens After You Submit a Bug or Feature Report?

- -

- -

When making open-source contributions, sometimes it can also help to mention that you’re interested in resolving the issue through a related pull request and review. Oftentimes open-source projects welcome new contributors but may have specific requirements. These requirements are usually spelled out within a CONTRIBUTING.md document found somewhere in the repository or the organization-level documentation. It’s also completely okay to let other contributors build solutions for the issue (like we mentioned before, all contributions matter, including the reporting of bugs or features themselves)!

- -

Developing and Testing an Open-source Contribution

- -

Creating a Development Workspace

- -

- -

Once ready to develop a solution for the reported need in the open-source project, you’ll need a place to version your updates. This work generally takes place through version control on focused branches which are named in a way that relates to the focus. When working on GitHub, this work also commonly takes place on forked repository copies. Using these methods helps isolate your changes from other work that takes place within the project. It also can help you track your progress alongside related changes that might take place before you’re able to seek review or code merges.

- -

Bug or Feature Verification with Test-driven Development

- -
- - -
- -

One can use a test-driven development approach as numbered steps (Wikipedia).

- -
-
1. Add or modify a test which checks for a bug fix or feature addition
2. Run all tests (expecting the newly added test content to fail)
3. Write a simple version of code which allows the tests to succeed
4. Verify that all tests now pass
5. Return to step 3, refactoring the code as needed
- - -
-
- -

If you decide to develop a solution for what you reported, one software strategy which can help you remain focused and objective is test-driven development. Using this pattern sets a “cognitive milestone” for you as you develop a solution to what was reported. Open-source projects can have many interesting components which could take time and be challenging to understand. The addition of the test and related development will help keep you goal-oriented without getting lost in the “software forest” of a project.
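The numbered steps above can be sketched in miniature (the function and behavior here are hypothetical, runnable directly or under pytest): the test is written first and fails, then the simplest implementation makes it pass.

```python
# Step 1: add a test which describes the reported bug fix or feature
def test_slugify_lowercases_and_joins():
    assert slugify("Hello World") == "hello-world"


# Step 2: running the test at this point fails (slugify is undefined);
# Step 3: write the simplest version of code which allows the test to succeed
def slugify(title: str) -> str:
    return "-".join(title.lower().split())


# Step 4: verify that the test now passes
test_slugify_lowercases_and_joins()
```

From here, step 5 would refactor `slugify` (for example, to strip punctuation) while the test keeps the behavior anchored.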

- -

Prefer Simple Over Complex Changes

- -
-

…
Simple is better than complex.
Complex is better than complicated.
…
- PEP 20: The Zen of Python

-
- -

Further channeling step 3 from test-driven development above, prefer simple changes over more complex ones (recognizing that the absolute simplest can take iteration and thought). Some of the best solutions are often the most easily understood ones (where the code addition or changes seem obvious afterwards). A “simplest version” of the code can often be more quickly refactored and completed than devising a “perfect” solution the first time. Remember, you’ll very likely have the help of a code review before the code is merged (expect to learn more and add changes during review!).

- -

It might be tempting to address more than one bug or feature at the same time. Avoid feature creep as you build solutions - stay focused on the task at hand! Take note of things you notice on your journey to address the reported needs. These can become additional reported bugs or features which could be addressed later. Staying focused with your development will save you time, keep your tests constrained, and (theoretically) help reduce the time and complexity of code review.

- -

Developing a Solution

- -

- -

Once you have a test in place for the bug fix or feature addition, it’s time to work towards developing a solution. If you’ve taken time to accomplish the prior steps before this point, you may already have a good idea about how to go about a solution. If not, spend some time investigating the technical aspects of a solution, optionally adding this information to the report or discussion content for further review before development. Use timeboxing techniques to help make sure the time you spend in development is no more than necessary.

- -

Code Review, Revisions, and Post-actions

- -

Pull Requests and Code Review

- -

When your code and new test(s) are in a good spot, it’s time to ask for a code review. It might feel tempting to perfect the code. Instead, consider whether the code is “good enough” and would benefit from someone else providing feedback. Code review takes advantage of a strength of our species: collaborative and multi-perspectival thinking. Leverage this in your open-source experience by seeking feedback when things feel “good enough”.

- -
- - - -

Demonstrating Pareto Principle “vital few” through a small number of changes to achieve 80% of the value associated with the needs.

- -

One way to understand “good enough” is to assess whether you have reached what the Pareto Principle terms the “vital few” causes. The Pareto Principle states that roughly 80% of consequences come from 20% of causes (the “vital few”). What are the 20% of changes (for example, as commits) which are required to achieve 80% of the desired intent for development with your open-source contribution? When you reach those 20% of the changes, consider opening a pull request to gather more insight about whether those changes will suffice and how the remaining effort might be spent.

- -

As you go through the process of opening a pull request, be sure to follow the project’s CONTRIBUTING.md documentation; each one can vary. When working on GitHub-based projects, you’ll need to open a pull request on the correct branch (usually upstream main). If you used a GitHub issue to help report the issue, mention the issue in the pull request description using the #issue number (for example #123, where the issue link would look like: https://github.com/orgname/reponame/issues/123) reference to help link the work to the reported need. This will cause the pull request to show up within the issue and automatically create a link to the issue from the pull request.

- -

Code Revisions

- -
-

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” - Antoine de Saint-Exupéry

-
- -

You may be asked to update your code based on automated code quality checks or reviewer requests. Treat these with care; embrace learning and remember that this step can take 25% of the total time for the contribution. When working on GitHub forks or branches, you can make additional commits directly on the development branch which was used for the pull request. If your reviewers requested changes, re-request their review once changes have been made to help let them know the code is ready for another look.

- -

Post-actions and Tidying Up Afterwards

- -

- -

Once the code has been accepted by the reviewers and passes any automated testing suite(s), the content is ready to be merged. Oftentimes this work is completed by core maintainers of the project. After the code is merged, it’s usually a good idea to clean up your workspace by deleting your development branch and syncing with the upstream repository. While it’s up to core maintainers to decide on report closure, typically the reported need content can be closed and might benefit from a comment describing the fix. Many of these steps are considered common courtesy but also, importantly, assist in setting you up for your next contributions!

- -

Concluding Thoughts

- -

Hopefully the above helps you understand the open-source contribution process better. As stated earlier, every little part helps! Best wishes on your open-source journey and happy Codesgiving!

- -

References

- - -

Tip of the Month: Python Memory Management and Troubleshooting

- - - - - - - - -
- - - - - -
- - - -


- -
- - -
- -

Each month we seek to provide a software tip of the month geared towards helping you achieve your software goals. Views expressed in the content belong to the content creators and not the organization, its affiliates, or employees. If you have any software questions or suggestions for an upcoming tip of the month, please don’t hesitate to reach out!

- -
-
- -

Introduction

- - -

Have you ever run Python code only to find it taking forever to complete or sometimes abruptly ending with an error like: 123456 Killed or killed (program exited with code: 137)? You may have experienced memory resource or management challenges associated with these scenarios. This post will cover some computer memory definitions, how Python makes use of computer memory, and share some tools which may help with these types of challenges.

- -

What is Memory?

- -

Computer Memory

- -

- -

Computer memory is a type of computer resource available for use by software on a computer

- -

Computer memory, also sometimes known as “RAM”, “random-access memory”, or “dynamic memory”, is a type of resource used by computer software on a computer. “Computer memory stores information, such as data and programs for immediate use in the computer. … Main memory operates at a high speed compared to non-memory storage which is slower but less expensive and oftentimes higher in capacity.” (Wikipedia: Computer memory).

- -
Memory Blocks
A.) All memory blocks available: [ Block ] [ Block ] [ Block ]
B.) Some memory blocks in use: [ Block ] [ Block ] [ Block ]

Practical analogy
C.) You have limited buckets to hold things: 🪣 🪣 🪣
D.) Two buckets are used, the other remains empty: 🪣 🪣 🪣
- -

Fixed-size memory blocks may be free or used at various times. They can be thought of like reusable buckets to hold things.

- -

One way to organize computer memory is through the use of “fixed-size blocks”, also called “blocks”. Fixed-size memory blocks are chunks of memory of a certain byte size (usually all the same size). Memory blocks may be in use or free at different times.

- -

- -

Memory heaps help organize available memory on a computer for specific procedures. Heaps may have one or many memory pools.

- -

Computer memory blocks may be organized in hierarchical layers to manage memory efficiently or towards a specific purpose. One top-level organization model for computer memory is the use of heaps, which describe chunks of the total memory available on a computer for specific processes. These heaps may be private (only available to a specific software process) or shared (available to one or many software processes). Heaps are sometimes further segmented into pools, which are areas of the heap that can be used for specific purposes.

- -

### Memory Allocator

- -

- -

Memory allocators help software reserve and free computer memory resources.

- -

Memory management is a concept which helps enable the shared use of computer memory and avoids challenges such as memory overuse (where all memory is in use and never released for use by other software).
Computer memory management often occurs through the use of a memory allocator, which controls how computer memory resources are used by software.
Computer software is written to interact with memory allocators to use computer memory.
Memory allocators may be used manually (with specific directions provided on when and how to use memory resources) or automatically (with an algorithmic approach of some kind).
The memory allocator usually performs the following actions with memory (in addition to others):

- -
- “Allocation”: computer memory resource reservation (taking memory). This is sometimes also known as “alloc” or “allocate memory”.
- “Deallocation”: computer memory resource freeing (giving back memory for other uses). This is sometimes also known as “free” or “freeing memory from allocation”.
- -
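The allocation and deallocation steps above can be observed directly from Python by calling the C standard library's own allocator through `ctypes`. This is a minimal sketch that assumes a POSIX system (such as Linux or macOS), where `ctypes.CDLL(None)` loads the C library:

```python
import ctypes

# load the C standard library (assumes a POSIX system such as Linux or macOS)
libc = ctypes.CDLL(None)
libc.malloc.restype = ctypes.c_void_p
libc.free.argtypes = [ctypes.c_void_p]

# allocation: reserve 64 bytes of memory and receive its address
address = libc.malloc(64)
print(f"allocated 64 bytes at address {address:#x}")

# deallocation: give the memory back for other uses
libc.free(address)
```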

### Garbage Collection

- -

- -

Garbage collectors help free computer memory which is no longer referenced by software.

- -

“Garbage collection (GC)” describes a type of automated memory management.
“The garbage collector attempts to reclaim memory which was allocated by the program, but is no longer referenced; such memory is called garbage.” (Wikipedia: Garbage collection (computer science)).
A garbage collector often works in tandem with a memory allocator to help control computer memory resource usage in software development.
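A classic case where garbage collection is needed is a reference cycle: two objects that reference each other stay "referenced" even after the program can no longer reach them. A small sketch using Python's built-in `gc` module (the `Node` class is hypothetical, for illustration):

```python
import gc

class Node:
    """a tiny object that can reference another object"""
    def __init__(self):
        self.partner = None

# build a reference cycle: each object references the other
a, b = Node(), Node()
a.partner, b.partner = b, a

# dropping our names does not free the objects immediately,
# because the cycle keeps each object referencing the other
del a, b

# the garbage collector detects and reclaims the unreachable cycle
unreachable = gc.collect()
print(f"unreachable objects reclaimed: {unreachable}")
```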

- -

## How Does Python Interact with Computer Memory?

- -

### Python Overview

- -

- -

A Python interpreter executes Python code and manages memory for Python procedures.

- -

Python is an interpreted, “high-level” programming language (Python: What is Python?).
Interpreted languages are those which include an “interpreter” that executes code written in a particular way (Wikipedia: Interpreter (computing)).
High-level languages such as Python often remove the requirement for software developers to manually perform memory management (Wikipedia: High-level programming language).

- -

Python code is executed by a commonly pre-packaged and downloaded binary called the Python interpreter.
The Python interpreter reads Python code and performs memory management as the code is executed.
CPython is the most commonly used Python interpreter, and it is what’s used as the reference for other content here.
There are also other interpreters, such as PyPy, Jython, and IronPython, which all handle memory differently than the CPython interpreter.
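You can check which interpreter implementation is executing your code with the standard library's `platform` module:

```python
import platform
import sys

# which Python interpreter implementation is executing this code?
print(platform.python_implementation())  # e.g. "CPython", "PyPy", "Jython", "IronPython"
print(sys.version)
```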

- -

### Python’s Memory Manager

- -

- -

The Python memory manager helps manage memory for Python code executed by the Python interpreter.

- -

Memory is managed for Python software processes automatically (when unspecified) or manually (when specified) through the Python interpreter.
The Python memory manager is an abstraction which manages memory for Python software processes through the Python interpreter (Python: Memory Management).
From a high-level perspective, we assume variables and other operations written in Python will automatically allocate and deallocate memory through the Python interpreter when executed.
Python’s memory manager performs this work through various memory allocators and a garbage collector (or as configured with customizations) within a private Python memory heap.
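We can observe the memory manager's work indirectly: every object created in Python is backed by an allocation, whose size in bytes can be inspected with `sys.getsizeof`:

```python
import sys

# each object created in Python is backed by an allocation made through
# the Python memory manager; getsizeof reports that object's size in bytes
print(sys.getsizeof(0))             # a small integer
print(sys.getsizeof("cornucopia"))  # a short string
print(sys.getsizeof([1, 2, 3]))     # a list (header plus pointer storage)
```

Exact sizes vary by interpreter version and platform, which is one reason to measure rather than assume.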

- -

### Python’s Memory Allocators

- -

- -

The Python memory manager by default will use pymalloc internally or malloc from the system to allocate computer memory resources.

- -

The Python memory manager allocates memory for use through memory allocators.
Python may use one or many memory allocators depending on specifications in Python code and how the Python interpreter is configured (for example, see Python: Memory Management - Default Memory Allocators).
One way to understand Python memory allocators is through the following distinctions:

- -
- “Python Memory Allocator” (pymalloc): The Python interpreter is packaged with a specialized memory allocator called pymalloc. “Python has a pymalloc allocator optimized for small objects (smaller or equal to 512 bytes) with a short lifetime.” (Python: Memory Management - The pymalloc allocator). Ultimately, pymalloc uses C malloc to implement memory work.
- C dynamic memory allocator (malloc): When pymalloc is disabled or memory requirements exceed pymalloc’s constraints, the Python interpreter will directly use a function from the C standard library called malloc. When malloc is used by the Python interpreter, it uses the system’s existing implementation of malloc.
- -
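The 512-byte boundary mentioned above can be made concrete. In CPython, an object at or below that size is typically served by pymalloc, while larger requests typically fall through to the system's malloc (a simplified view; the exact behavior depends on the interpreter build and configuration):

```python
import sys

small = b"x" * 100     # an object well under 512 bytes
large = b"x" * 10_000  # an object well over 512 bytes

# getsizeof shows which side of the pymalloc threshold each object falls on
print(f"small object: {sys.getsizeof(small)} bytes")
print(f"large object: {sys.getsizeof(large)} bytes")
```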

- -

pymalloc makes use of arenas to further organize pools within a computer memory heap.

- -

It’s important to note that pymalloc adds additional abstractions to how memory is organized through the use of “arenas”.
These arenas are specific to pymalloc purposes.
pymalloc may be disabled through a special environment variable called PYTHONMALLOC (for example, to use only malloc).
This same environment variable may be used with debug settings in order to help troubleshoot in-depth questions.
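Because PYTHONMALLOC must be set before the interpreter starts, one way to experiment with it from Python is to launch a child interpreter with the variable set. A minimal sketch (the child process below only confirms it started; real use would run the code you want to profile):

```python
import os
import subprocess
import sys

# start a child interpreter with pymalloc disabled (malloc only)
env = dict(os.environ, PYTHONMALLOC="malloc")
result = subprocess.run(
    [sys.executable, "-c", "print('started with PYTHONMALLOC=malloc')"],
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```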

- -

### Additional Python Memory Allocators

- -

- -

Python code and package dependencies may stipulate the use of additional memory allocators, such as mimalloc and jemalloc outside of the Python memory manager.

- -

Python provides the capability of customizing memory allocation through the use of packages.
See below for some notable examples of additional memory allocation possibilities:

- -
- NumPy Memory Allocation: NumPy uses custom C-APIs which are backed by C dynamic memory allocation functions (alloc, free, realloc) to help address memory management. These interfaces can be controlled directly through NumPy to help manage memory effectively when using the package.
- PyArrow Memory Allocators: PyArrow provides the capability to use malloc, jemalloc, or mimalloc through the PyArrow Memory Pools group of functions. A default memory allocator is selected when PyArrow is used based on the operating system and the availability of the memory allocator on the system. The selection of a memory allocator for use with PyArrow can be influenced by how it performs on a particular system.
- -

### Python Reference Counting

- -
_Python reference counting at a simple level works through the use of object reference increments and decrements._

As computer memory is allocated to Python processes, the Python memory manager keeps track of these allocations through the use of a [reference counter](https://en.wikipedia.org/wiki/Reference_counting).
In Python, we could label this an "object reference counter" because all data in Python is represented by objects ([Python: Data model](https://docs.python.org/3/reference/datamodel.html#objects-values-and-types)).
"... CPython counts how many different places there are that have a reference to an object. Such a place could be another object, or a global (or static) C variable, or a local variable in some C function." ([Python Developer's Guide: Garbage collector design](https://devguide.python.org/internals/garbage-collector/))

### Python's Garbage Collection

_The Python garbage collector works as part of the Python memory manager to free memory which is no longer needed (based on reference count)._

Python by default uses an optional garbage collector to automatically deallocate garbage memory through the Python interpreter in CPython.
"When an object's reference count becomes zero, the object is deallocated." ([Python Developer's Guide: Garbage collector design](https://devguide.python.org/internals/garbage-collector/))
Python's garbage collector focuses on collecting garbage created by `pymalloc`, C memory functions, as well as other memory allocators like `mimalloc` and `jemalloc`.
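The reference counting described above can be observed directly with `sys.getrefcount` (note that the call itself temporarily adds one reference while it runs, so we compare against a baseline rather than reading absolute numbers):

```python
import sys

a_string = "cornucopia"
baseline = sys.getrefcount(a_string)  # includes getrefcount's own temporary reference

reference_a_string = a_string  # a_string is now referenced one additional time
print(sys.getrefcount(a_string) - baseline)  # prints 1

del reference_a_string  # the additional reference has been deleted
print(sys.getrefcount(a_string) - baseline)  # prints 0
```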
- -## Python Tools for Observing Memory Behavior - -### Python Built-in Tools - -```python -import gc -import sys - -# set gc in debug mode for detecting memory leaks -gc.set_debug(gc.DEBUG_LEAK) - -# create an int object -an_object = 1 - -# show the number of uncollectable references via COLLECTED -COLLECTED = gc.collect() -print(f"Uncollectable garbage references: {COLLECTED}") - -# show the reference count for an object -print(f"Reference count of `an_object`: {sys.getrefcount(an_object)}") -``` - -The [`gc` module](https://docs.python.org/3/library/gc.html) provides an interface to the Python garbage collector. -In addition, the [`sys` module](https://docs.python.org/3/library/sys.html) provides many functions which provide information about references and other details about Python objects as they are executed through the interpreter. -These functions and other packages can help software developers observe memory behaviors within Python procedures. - -### Python Package: Scalene - -
- - Scalene provides a web interface to analyze memory, CPU, and GPU resource consumption in one spot alongside suggested areas of concern. - - -
- Scalene provides a web interface to analyze memory, CPU, and GPU resource consumption in one spot alongside suggested areas of concern. - -
- -
[Scalene](https://github.com/plasma-umass/scalene) is a Python package for analyzing memory, CPU, and GPU resource consumption.
It provides [a web interface](https://github.com/plasma-umass/scalene?tab=readme-ov-file#web-based-gui) to help visualize and understand how resources are consumed.
Scalene provides suggestions on which portions of your code to troubleshoot through the web interface.
Scalene can also be configured to work with [OpenAI](https://en.wikipedia.org/wiki/OpenAI) [LLMs](https://en.wikipedia.org/wiki/Large_language_model) by way of an [OpenAI API provided by the user](https://github.com/plasma-umass/scalene?tab=readme-ov-file#ai-powered-optimization-suggestions).

### Python Package: Memray
- - Memray provides the ability to create and view flamegraphs which show how memory was consumed as a procedure executed. - - -
- Memray provides the ability to create and view flamegraphs which show how memory was consumed as a procedure executed. - -
- -
[Memray](https://github.com/bloomberg/memray) is a Python package to track memory allocation within Python and compiled extension modules.
Memray provides a high-level way to investigate memory performance and adds visualizations such as [flamegraphs](https://www.brendangregg.com/flamegraphs.html) (which contextualize [stack traces](https://en.wikipedia.org/wiki/Stack_trace) and memory allocations in one spot).
Memray seeks to provide a way to overcome challenges with tracking and understanding Python and other memory allocators (such as C, C++, or Rust libraries used in tandem with a Python process).

## Concluding Thoughts

It's worth mentioning that this article covers only a small fraction of how and what memory is as well as how Python might make use of it.
Hopefully it clarifies the process and provides a way to get started with investigating memory within the software you work with.
Wishing you the very best in your software journey with memory!
| Processed line of code | Reference count |
| --- | --- |
| `a_string = "cornucopia"` | `a_string`: 1 |
| `reference_a_string = a_string` | `a_string`: 2 (because `a_string` is now referenced twice) |
| `del reference_a_string` | `a_string`: 1 (because the additional reference has been deleted) |
- - diff --git a/preview/pr-29/404.html b/preview/pr-29/404.html deleted file mode 100644 index b9a29656f5..0000000000 --- a/preview/pr-29/404.html +++ /dev/null @@ -1,481 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -404 | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-

- Page Not Found

- -

Try searching the whole site for the content you want:

- -
- - -
-
- - -
- - - - - - - diff --git a/preview/pr-29/_scripts/anchors.js b/preview/pr-29/_scripts/anchors.js deleted file mode 100644 index 904edf9c15..0000000000 --- a/preview/pr-29/_scripts/anchors.js +++ /dev/null @@ -1,47 +0,0 @@ -/* - creates link next to each heading that links to that section. -*/ - -{ - const onLoad = () => { - // for each heading - const headings = document.querySelectorAll( - "h1[id], h2[id], h3[id], h4[id]" - ); - for (const heading of headings) { - // create anchor link - const link = document.createElement("a"); - link.classList.add("icon", "fa-solid", "fa-link", "anchor"); - link.href = "#" + heading.id; - link.setAttribute("aria-label", "link to this section"); - heading.append(link); - - // if first heading in the section, move id to parent section - if (heading.matches("section > :first-child")) { - heading.parentElement.id = heading.id; - heading.removeAttribute("id"); - } - } - }; - - // scroll to target of url hash - const scrollToTarget = () => { - const id = window.location.hash.replace("#", ""); - const target = document.getElementById(id); - - if (!target) return; - const offset = document.querySelector("header").clientHeight || 0; - window.scrollTo({ - top: target.getBoundingClientRect().top + window.scrollY - offset, - behavior: "smooth", - }); - }; - - // after page loads - window.addEventListener("load", onLoad); - window.addEventListener("load", scrollToTarget); - window.addEventListener("tagsfetched", scrollToTarget); - - // when hash nav happens - window.addEventListener("hashchange", scrollToTarget); -} diff --git a/preview/pr-29/_scripts/dark-mode.js b/preview/pr-29/_scripts/dark-mode.js deleted file mode 100644 index b0124d94f2..0000000000 --- a/preview/pr-29/_scripts/dark-mode.js +++ /dev/null @@ -1,28 +0,0 @@ -/* - manages light/dark mode. 
-*/ - -{ - // save/load user's dark mode preference from local storage - const loadDark = () => window.localStorage.getItem("dark-mode") === "true"; - const saveDark = (value) => window.localStorage.setItem("dark-mode", value); - - // immediately load saved mode before page renders - document.documentElement.dataset.dark = loadDark(); - - const onLoad = () => { - // update toggle button to match loaded mode - document.querySelector(".dark-toggle").checked = - document.documentElement.dataset.dark === "true"; - }; - - // after page loads - window.addEventListener("load", onLoad); - - // when user toggles mode button - window.onDarkToggleChange = (event) => { - const value = event.target.checked; - document.documentElement.dataset.dark = value; - saveDark(value); - }; -} diff --git a/preview/pr-29/_scripts/fetch-tags.js b/preview/pr-29/_scripts/fetch-tags.js deleted file mode 100644 index c843b67fdc..0000000000 --- a/preview/pr-29/_scripts/fetch-tags.js +++ /dev/null @@ -1,67 +0,0 @@ -/* - fetches tags (aka "topics") from a given GitHub repo and adds them to row of - tag buttons. specify repo in data-repo attribute on row. 
-*/ - -{ - const onLoad = async () => { - // get tag rows with specified repos - const rows = document.querySelectorAll("[data-repo]"); - - // for each repo - for (const row of rows) { - // get props from tag row - const repo = row.dataset.repo.trim(); - const link = row.dataset.link.trim(); - - // get tags from github - if (!repo) continue; - let tags = await fetchTags(repo); - - // filter out tags already present in row - let existing = [...row.querySelectorAll(".tag")].map((tag) => - window.normalizeTag(tag.innerText) - ); - tags = tags.filter((tag) => !existing.includes(normalizeTag(tag))); - - // add tags to row - for (const tag of tags) { - const a = document.createElement("a"); - a.classList.add("tag"); - a.innerHTML = tag; - a.href = `${link}?search="tag: ${tag}"`; - a.dataset.tooltip = `Show items with the tag "${tag}"`; - row.append(a); - } - - // delete tags container if empty - if (!row.innerText.trim()) row.remove(); - } - - // emit "tags done" event for other scripts to listen for - window.dispatchEvent(new Event("tagsfetched")); - }; - - // after page loads - window.addEventListener("load", onLoad); - - // GitHub topics endpoint - const api = "https://api.github.com/repos/REPO/topics"; - const headers = new Headers(); - headers.set("Accept", "application/vnd.github+json"); - - // get tags from GitHub based on repo name - const fetchTags = async (repo) => { - const url = api.replace("REPO", repo); - try { - const response = await (await fetch(url)).json(); - if (response.names) return response.names; - else throw new Error(JSON.stringify(response)); - } catch (error) { - console.groupCollapsed("GitHub fetch tags error"); - console.log(error); - console.groupEnd(); - return []; - } - }; -} diff --git a/preview/pr-29/_scripts/search.js b/preview/pr-29/_scripts/search.js deleted file mode 100644 index fa23ca4c21..0000000000 --- a/preview/pr-29/_scripts/search.js +++ /dev/null @@ -1,215 +0,0 @@ -/* - filters elements on page based on url or search box. 
- syntax: term1 term2 "full phrase 1" "full phrase 2" "tag: tag 1" - match if: all terms AND at least one phrase AND at least one tag -*/ -{ - // elements to filter - const elementSelector = ".card, .citation, .post-excerpt"; - // search box element - const searchBoxSelector = ".search-box"; - // results info box element - const infoBoxSelector = ".search-info"; - // tags element - const tagSelector = ".tag"; - - // split search query into terms, phrases, and tags - const splitQuery = (query) => { - // split into parts, preserve quotes - const parts = query.match(/"[^"]*"|\S+/g) || []; - - // bins - const terms = []; - const phrases = []; - const tags = []; - - // put parts into bins - for (let part of parts) { - if (part.startsWith('"')) { - part = part.replaceAll('"', "").trim(); - if (part.startsWith("tag:")) - tags.push(normalizeTag(part.replace(/tag:\s*/, ""))); - else phrases.push(part.toLowerCase()); - } else terms.push(part.toLowerCase()); - } - - return { terms, phrases, tags }; - }; - - // normalize tag string for comparison - window.normalizeTag = (tag) => - tag.trim().toLowerCase().replaceAll(/-|\s+/g, " "); - - // get data attribute contents of element and children - const getAttr = (element, attr) => - [element, ...element.querySelectorAll(`[data-${attr}]`)] - .map((element) => element.dataset[attr]) - .join(" "); - - // determine if element should show up in results based on query - const elementMatches = (element, { terms, phrases, tags }) => { - // tag elements within element - const tagElements = [...element.querySelectorAll(".tag")]; - - // check if text content exists in element - const hasText = (string) => - ( - element.innerText + - getAttr(element, "tooltip") + - getAttr(element, "search") - ) - .toLowerCase() - .includes(string); - // check if text matches a tag in element - const hasTag = (string) => - tagElements.some((tag) => normalizeTag(tag.innerText) === string); - - // match logic - return ( - (terms.every(hasText) || !terms.length) 
&& - (phrases.some(hasText) || !phrases.length) && - (tags.some(hasTag) || !tags.length) - ); - }; - - // loop through elements, hide/show based on query, and return results info - const filterElements = (parts) => { - let elements = document.querySelectorAll(elementSelector); - - // results info - let x = 0; - let n = elements.length; - let tags = parts.tags; - - // filter elements - for (const element of elements) { - if (elementMatches(element, parts)) { - element.style.display = ""; - x++; - } else element.style.display = "none"; - } - - return [x, n, tags]; - }; - - // highlight search terms - const highlightMatches = async ({ terms, phrases }) => { - // make sure Mark library available - if (typeof Mark === "undefined") return; - - // reset - new Mark(document.body).unmark(); - - // limit number of highlights to avoid slowdown - let counter = 0; - const filter = () => counter++ < 100; - - // highlight terms and phrases - new Mark(elementSelector) - .mark(terms, { separateWordSearch: true, filter }) - .mark(phrases, { separateWordSearch: false, filter }); - }; - - // update search box based on query - const updateSearchBox = (query = "") => { - const boxes = document.querySelectorAll(searchBoxSelector); - - for (const box of boxes) { - const input = box.querySelector("input"); - const button = box.querySelector("button"); - const icon = box.querySelector("button i"); - input.value = query; - icon.className = input.value.length - ? "icon fa-solid fa-xmark" - : "icon fa-solid fa-magnifying-glass"; - button.disabled = input.value.length ? false : true; - } - }; - - // update info box based on query and results - const updateInfoBox = (query, x, n) => { - const boxes = document.querySelectorAll(infoBoxSelector); - - if (query.trim()) { - // show all info boxes - boxes.forEach((info) => (info.style.display = "")); - - // info template - let info = ""; - info += `Showing ${x.toLocaleString()} of ${n.toLocaleString()} results
`; - info += "Clear search"; - - // set info HTML string - boxes.forEach((el) => (el.innerHTML = info)); - } - // if nothing searched - else { - // hide all info boxes - boxes.forEach((info) => (info.style.display = "none")); - } - }; - - // update tags based on query - const updateTags = (query) => { - const { tags } = splitQuery(query); - document.querySelectorAll(tagSelector).forEach((tag) => { - // set active if tag is in query - if (tags.includes(normalizeTag(tag.innerText))) - tag.setAttribute("data-active", ""); - else tag.removeAttribute("data-active"); - }); - }; - - // run search with query - const runSearch = (query = "") => { - const parts = splitQuery(query); - const [x, n] = filterElements(parts); - updateSearchBox(query); - updateInfoBox(query, x, n); - updateTags(query); - highlightMatches(parts); - }; - - // update url based on query - const updateUrl = (query = "") => { - const url = new URL(window.location); - let params = new URLSearchParams(url.search); - params.set("search", query); - url.search = params.toString(); - window.history.replaceState(null, null, url); - }; - - // search based on url param - const searchFromUrl = () => { - const query = - new URLSearchParams(window.location.search).get("search") || ""; - runSearch(query); - }; - - // return func that runs after delay - const debounce = (callback, delay = 250) => { - let timeout; - return (...args) => { - window.clearTimeout(timeout); - timeout = window.setTimeout(() => callback(...args), delay); - }; - }; - - // when user types into search box - const debouncedRunSearch = debounce(runSearch, 1000); - window.onSearchInput = (target) => { - debouncedRunSearch(target.value); - updateUrl(target.value); - }; - - // when user clears search box with button - window.onSearchClear = () => { - runSearch(); - updateUrl(); - }; - - // after page loads - window.addEventListener("load", searchFromUrl); - // after tags load - window.addEventListener("tagsfetched", searchFromUrl); -} diff --git 
a/preview/pr-29/_scripts/site-search.js b/preview/pr-29/_scripts/site-search.js deleted file mode 100644 index caff0a611f..0000000000 --- a/preview/pr-29/_scripts/site-search.js +++ /dev/null @@ -1,14 +0,0 @@ -/* - for site search component. searches site/domain via google. -*/ - -{ - // when user submits site search form/box - window.onSiteSearchSubmit = (event) => { - event.preventDefault(); - const google = "https://www.google.com/search?q=site:"; - const site = window.location.origin; - const query = event.target.elements.query.value; - window.location = google + site + " " + query; - }; -} diff --git a/preview/pr-29/_scripts/tooltip.js b/preview/pr-29/_scripts/tooltip.js deleted file mode 100644 index 49eccfc5b8..0000000000 --- a/preview/pr-29/_scripts/tooltip.js +++ /dev/null @@ -1,41 +0,0 @@ -/* - shows a popup of text on hover/focus of any element with the data-tooltip - attribute. -*/ - -{ - const onLoad = () => { - // make sure Tippy library available - if (typeof tippy === "undefined") return; - - // get elements with non-empty tooltips - const elements = [...document.querySelectorAll("[data-tooltip]")].filter( - (element) => element.dataset.tooltip.trim() && !element._tippy - ); - - // add tooltip to elements - tippy(elements, { - content: (element) => element.dataset.tooltip.trim(), - delay: [200, 0], - offset: [0, 20], - allowHTML: true, - interactive: true, - appendTo: () => document.body, - aria: { - content: "describedby", - expanded: null, - }, - onShow: ({ reference, popper }) => { - const dark = reference.closest("[data-dark]")?.dataset.dark; - if (dark === "false") popper.dataset.dark = true; - if (dark === "true") popper.dataset.dark = false; - }, - // onHide: () => false, // debug - }); - }; - - // after page loads - window.addEventListener("load", onLoad); - // after tags load - window.addEventListener("tagsfetched", onLoad); -} diff --git a/preview/pr-29/_styles/-theme.css b/preview/pr-29/_styles/-theme.css deleted file mode 100644 index 
64d6f321f4..0000000000 --- a/preview/pr-29/_styles/-theme.css +++ /dev/null @@ -1,41 +0,0 @@ -[data-dark=false] { - --primary: #1e88e5; - --secondary: #90caf9; - --text: #000000; - --background: #ffffff; - --background-alt: #fafafa; - --light-gray: #e0e0e0; - --gray: #808080; - --overlay: #00000020; -} - -[data-dark=true] { - --primary: #64b5f6; - --secondary: #1e88e5; - --text: #ffffff; - --background: #181818; - --background-alt: #1c1c1c; - --light-gray: #404040; - --gray: #808080; - --overlay: #ffffff10; -} - -:root { - --title: "Barlow", sans-serif; - --heading: "Barlow", sans-serif; - --body: "Barlow", sans-serif; - --code: "Roboto Mono", monospace; - --medium: 1rem; - --large: 1.2rem; - --xl: 1.4rem; - --xxl: 1.6rem; - --thin: 200; - --regular: 400; - --semi-bold: 500; - --bold: 600; - --spacing: 2; - --rounded: 5px; - --shadow: 0 0 10px 0 var(--overlay); -} - -/*# sourceMappingURL=-theme.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/-theme.css.map b/preview/pr-29/_styles/-theme.css.map deleted file mode 100644 index b2c6823bb4..0000000000 --- a/preview/pr-29/_styles/-theme.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["-theme.scss"],"names":[],"mappings":"AACA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAEF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EAEE;EACA;EACA;EACA;EAGA;EACA;EACA;EACA;EAGA;EACA;EACA;EACA;EAGA;EAGA;EACA","sourcesContent":["// colors\n[data-dark=\"false\"] {\n --primary: #1e88e5;\n --secondary: #90caf9;\n --text: #000000;\n --background: #ffffff;\n --background-alt: #fafafa;\n --light-gray: #e0e0e0;\n --gray: #808080;\n --overlay: #00000020;\n}\n[data-dark=\"true\"] {\n --primary: #64b5f6;\n --secondary: #1e88e5;\n --text: #ffffff;\n --background: #181818;\n --background-alt: #1c1c1c;\n --light-gray: #404040;\n --gray: #808080;\n --overlay: #ffffff10;\n}\n\n:root {\n // font families\n --title: \"Barlow\", sans-serif;\n --heading: \"Barlow\", sans-serif;\n --body: \"Barlow\", 
sans-serif;\n --code: \"Roboto Mono\", monospace;\n\n // font sizes\n --medium: 1rem;\n --large: 1.2rem;\n --xl: 1.4rem;\n --xxl: 1.6rem;\n\n // font weights\n --thin: 200;\n --regular: 400;\n --semi-bold: 500;\n --bold: 600;\n\n // text line spacing\n --spacing: 2;\n\n // effects\n --rounded: 5px;\n --shadow: 0 0 10px 0 var(--overlay);\n}\n"],"file":"-theme.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/alert.css b/preview/pr-29/_styles/alert.css deleted file mode 100644 index a270c6f42e..0000000000 --- a/preview/pr-29/_styles/alert.css +++ /dev/null @@ -1,36 +0,0 @@ -.alert { - position: relative; - display: flex; - gap: 20px; - align-items: center; - margin: 20px 0; - padding: 20px; - border-radius: var(--rounded); - overflow: hidden; - text-align: left; - line-height: var(--spacing); -} - -.alert:before { - content: ""; - position: absolute; - inset: 0; - opacity: 0.1; - background: var(--color); - z-index: -1; -} - -.alert > .icon { - color: var(--color); - font-size: var(--large); -} - -.alert-content > *:first-child { - margin-top: 0; -} - -.alert-content > *:last-child { - margin-bottom: 0; -} - -/*# sourceMappingURL=alert.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/alert.css.map b/preview/pr-29/_styles/alert.css.map deleted file mode 100644 index f34316bcab..0000000000 --- a/preview/pr-29/_styles/alert.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["alert.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;;;AAGF;EACE;;;AAGF;EACE","sourcesContent":[".alert {\n position: relative;\n display: flex;\n gap: 20px;\n align-items: center;\n margin: 20px 0;\n padding: 20px;\n border-radius: var(--rounded);\n overflow: hidden;\n text-align: left;\n line-height: var(--spacing);\n}\n\n.alert:before {\n content: \"\";\n position: absolute;\n inset: 0;\n opacity: 0.1;\n background: var(--color);\n 
z-index: -1;\n}\n\n.alert > .icon {\n color: var(--color);\n font-size: var(--large);\n}\n\n.alert-content > *:first-child {\n margin-top: 0;\n}\n\n.alert-content > *:last-child {\n margin-bottom: 0;\n}\n"],"file":"alert.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/all.css b/preview/pr-29/_styles/all.css deleted file mode 100644 index 6d9aef3532..0000000000 --- a/preview/pr-29/_styles/all.css +++ /dev/null @@ -1,7 +0,0 @@ -* { - box-sizing: border-box; - transition: none 0.2s; - -webkit-text-size-adjust: none; -} - -/*# sourceMappingURL=all.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/all.css.map b/preview/pr-29/_styles/all.css.map deleted file mode 100644 index 1c5453ecf1..0000000000 --- a/preview/pr-29/_styles/all.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["all.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA","sourcesContent":["* {\n box-sizing: border-box;\n transition: none 0.2s;\n -webkit-text-size-adjust: none;\n}\n"],"file":"all.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/anchor.css b/preview/pr-29/_styles/anchor.css deleted file mode 100644 index a0c340257d..0000000000 --- a/preview/pr-29/_styles/anchor.css +++ /dev/null @@ -1,23 +0,0 @@ -.anchor { - display: inline-block; - position: relative; - width: 0; - margin: 0; - left: 0.5em; - color: var(--primary) !important; - opacity: 0; - font-size: 0.75em; - text-decoration: none; - transition-property: opacity, color; -} - -*:hover > .anchor, -.anchor:focus { - opacity: 1; -} - -.anchor:hover { - color: var(--text) !important; -} - -/*# sourceMappingURL=anchor.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/anchor.css.map b/preview/pr-29/_styles/anchor.css.map deleted file mode 100644 index 060a4538ae..0000000000 --- a/preview/pr-29/_styles/anchor.css.map +++ /dev/null @@ -1 +0,0 @@ 
-{"version":3,"sourceRoot":"","sources":["anchor.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;AAAA;EAEE;;;AAGF;EACE","sourcesContent":[".anchor {\n display: inline-block;\n position: relative;\n width: 0;\n margin: 0;\n left: 0.5em;\n color: var(--primary) !important;\n opacity: 0;\n font-size: 0.75em;\n text-decoration: none;\n transition-property: opacity, color;\n}\n\n*:hover > .anchor,\n.anchor:focus {\n opacity: 1;\n}\n\n.anchor:hover {\n color: var(--text) !important;\n}\n"],"file":"anchor.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/background.css b/preview/pr-29/_styles/background.css deleted file mode 100644 index 025e56adf9..0000000000 --- a/preview/pr-29/_styles/background.css +++ /dev/null @@ -1,20 +0,0 @@ -.background { - position: relative; - background: var(--background); - color: var(--text); - z-index: 1; -} - -.background:before { - content: ""; - position: absolute; - inset: 0; - background-image: var(--image); - background-size: cover; - background-repeat: no-repeat; - background-position: 50% 50%; - opacity: 0.25; - z-index: -1; -} - -/*# sourceMappingURL=background.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/background.css.map b/preview/pr-29/_styles/background.css.map deleted file mode 100644 index b655d9e563..0000000000 --- a/preview/pr-29/_styles/background.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["background.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA","sourcesContent":[".background {\n position: relative;\n background: var(--background);\n color: var(--text);\n z-index: 1;\n}\n\n.background:before {\n content: \"\";\n position: absolute;\n inset: 0;\n background-image: var(--image);\n background-size: cover;\n background-repeat: no-repeat;\n background-position: 50% 50%;\n opacity: 0.25;\n z-index: -1;\n}\n"],"file":"background.css"} \ No 
newline at end of file diff --git a/preview/pr-29/_styles/body.css b/preview/pr-29/_styles/body.css deleted file mode 100644 index 7287261238..0000000000 --- a/preview/pr-29/_styles/body.css +++ /dev/null @@ -1,17 +0,0 @@ -html, -body { - margin: 0; - padding: 0; - min-height: 100vh; - background: var(--background); - color: var(--text); - font-family: var(--body); -} - -body { - display: flex; - flex-direction: column; - text-align: center; -} - -/*# sourceMappingURL=body.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/body.css.map b/preview/pr-29/_styles/body.css.map deleted file mode 100644 index 5fc5586066..0000000000 --- a/preview/pr-29/_styles/body.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["body.scss"],"names":[],"mappings":"AAAA;AAAA;EAEE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA","sourcesContent":["html,\nbody {\n margin: 0;\n padding: 0;\n min-height: 100vh;\n background: var(--background);\n color: var(--text);\n font-family: var(--body);\n}\n\nbody {\n display: flex;\n flex-direction: column;\n text-align: center;\n}\n"],"file":"body.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/bold.css b/preview/pr-29/_styles/bold.css deleted file mode 100644 index 94a711f107..0000000000 --- a/preview/pr-29/_styles/bold.css +++ /dev/null @@ -1,6 +0,0 @@ -b, -strong { - font-weight: var(--bold); -} - -/*# sourceMappingURL=bold.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/bold.css.map b/preview/pr-29/_styles/bold.css.map deleted file mode 100644 index 57012fd4b5..0000000000 --- a/preview/pr-29/_styles/bold.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["bold.scss"],"names":[],"mappings":"AAAA;AAAA;EAEE","sourcesContent":["b,\nstrong {\n font-weight: var(--bold);\n}\n"],"file":"bold.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/button.css b/preview/pr-29/_styles/button.css deleted file mode 100644 index 
505da8bd5f..0000000000 --- a/preview/pr-29/_styles/button.css +++ /dev/null @@ -1,50 +0,0 @@ -button { - cursor: pointer; -} - -.button-wrapper { - display: contents; -} - -.button { - display: inline-flex; - justify-content: center; - align-items: center; - gap: 10px; - max-width: calc(100% - 5px - 5px); - margin: 5px; - padding: 10px 15px; - border: none; - border-radius: var(--rounded); - background: var(--primary); - color: var(--background); - text-align: center; - font-family: var(--heading); - font-weight: var(--semi-bold); - line-height: 1; - text-decoration: none; - vertical-align: middle; - -webkit-appearance: none; - appearance: none; - transition-property: background, color; -} - -.button:hover { - background: var(--text); - color: var(--background); -} - -.button[data-style=bare] { - padding: 5px; - background: none; - color: var(--primary); -} -.button[data-style=bare]:hover { - color: var(--text); -} - -.button[data-flip] { - flex-direction: row-reverse; -} - -/*# sourceMappingURL=button.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/button.css.map b/preview/pr-29/_styles/button.css.map deleted file mode 100644 index 351a5ae814..0000000000 --- a/preview/pr-29/_styles/button.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["button.scss"],"names":[],"mappings":"AAAA;EACE;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;;;AAGF;EACE;EACA;EACA;;AAEA;EACE;;;AAIJ;EACE","sourcesContent":["button {\n cursor: pointer;\n}\n\n.button-wrapper {\n display: contents;\n}\n\n.button {\n display: inline-flex;\n justify-content: center;\n align-items: center;\n gap: 10px;\n max-width: calc(100% - 5px - 5px);\n margin: 5px;\n padding: 10px 15px;\n border: none;\n border-radius: var(--rounded);\n background: var(--primary);\n color: var(--background);\n text-align: center;\n font-family: var(--heading);\n font-weight: 
var(--semi-bold);\n line-height: 1;\n text-decoration: none;\n vertical-align: middle;\n -webkit-appearance: none;\n appearance: none;\n transition-property: background, color;\n}\n\n.button:hover {\n background: var(--text);\n color: var(--background);\n}\n\n.button[data-style=\"bare\"] {\n padding: 5px;\n background: none;\n color: var(--primary);\n\n &:hover {\n color: var(--text);\n }\n}\n\n.button[data-flip] {\n flex-direction: row-reverse;\n}\n"],"file":"button.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/card.css b/preview/pr-29/_styles/card.css deleted file mode 100644 index 2a70742542..0000000000 --- a/preview/pr-29/_styles/card.css +++ /dev/null @@ -1,49 +0,0 @@ -.card { - display: inline-flex; - justify-content: stretch; - align-items: center; - flex-direction: column; - width: 350px; - max-width: calc(100% - 20px - 20px); - margin: 20px; - background: var(--background); - border-radius: var(--rounded); - overflow: hidden; - box-shadow: var(--shadow); - vertical-align: top; -} - -.card[data-style=small] { - width: 250px; -} - -.card-image img { - aspect-ratio: 3/2; - object-fit: cover; -} - -.card-text { - display: inline-flex; - justify-content: flex-start; - align-items: center; - flex-direction: column; - gap: 20px; - padding: 20px; -} - -.card-text > *, -.card-text > .tags { - margin: 0; -} - -.card-title { - font-family: var(--heading); - font-weight: var(--semi-bold); -} - -.card-subtitle { - margin-top: -15px; - font-style: italic; -} - -/*# sourceMappingURL=card.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/card.css.map b/preview/pr-29/_styles/card.css.map deleted file mode 100644 index 1892c3e9bc..0000000000 --- a/preview/pr-29/_styles/card.css.map +++ /dev/null @@ -1 +0,0 @@ 
-{"version":3,"sourceRoot":"","sources":["card.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;;;AAIF;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;AAAA;EAEE;;;AAGF;EACE;EACA;;;AAGF;EACE;EACA","sourcesContent":[".card {\n display: inline-flex;\n justify-content: stretch;\n align-items: center;\n flex-direction: column;\n width: 350px;\n max-width: calc(100% - 20px - 20px);\n margin: 20px;\n background: var(--background);\n border-radius: var(--rounded);\n overflow: hidden;\n box-shadow: var(--shadow);\n vertical-align: top;\n}\n\n.card[data-style=\"small\"] {\n width: 250px;\n}\n\n.card-image img {\n aspect-ratio: 3 / 2;\n object-fit: cover;\n // box-shadow: var(--shadow);\n}\n\n.card-text {\n display: inline-flex;\n justify-content: flex-start;\n align-items: center;\n flex-direction: column;\n gap: 20px;\n padding: 20px;\n}\n\n.card-text > *,\n.card-text > .tags {\n margin: 0;\n}\n\n.card-title {\n font-family: var(--heading);\n font-weight: var(--semi-bold);\n}\n\n.card-subtitle {\n margin-top: -15px;\n font-style: italic;\n}\n"],"file":"card.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/checkbox.css b/preview/pr-29/_styles/checkbox.css deleted file mode 100644 index 8c77dc53e1..0000000000 --- a/preview/pr-29/_styles/checkbox.css +++ /dev/null @@ -1,5 +0,0 @@ -input[type=checkbox] { - cursor: pointer; -} - -/*# sourceMappingURL=checkbox.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/checkbox.css.map b/preview/pr-29/_styles/checkbox.css.map deleted file mode 100644 index 90fb493297..0000000000 --- a/preview/pr-29/_styles/checkbox.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["checkbox.scss"],"names":[],"mappings":"AAAA;EACE","sourcesContent":["input[type=\"checkbox\"] {\n cursor: pointer;\n}\n"],"file":"checkbox.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/citation.css 
b/preview/pr-29/_styles/citation.css deleted file mode 100644 index 58eba89ec9..0000000000 --- a/preview/pr-29/_styles/citation.css +++ /dev/null @@ -1,88 +0,0 @@ -.citation { - display: flex; - margin: 15px 0; - border-radius: var(--rounded); - background: var(--background); - overflow: hidden; - box-shadow: var(--shadow); -} - -.citation-image { - position: relative; - width: 180px; - flex-shrink: 0; -} - -.citation-image img { - position: absolute; - inset: 0; - width: 100%; - height: 100%; - object-fit: contain; -} - -.citation-text { - position: relative; - display: inline-flex; - flex-wrap: wrap; - gap: 15px; - height: min-content; - padding: 20px; - padding-left: 30px; - text-align: left; - z-index: 0; -} - -.citation-title, -.citation-authors, -.citation-details, -.citation-description { - width: 100%; - line-height: calc(var(--spacing) - 0.4); -} - -.citation-title { - font-weight: var(--semi-bold); -} - -.citation-text > .icon { - position: absolute; - top: 20px; - right: 20px; - color: var(--light-gray); - opacity: 0.5; - font-size: 30px; - z-index: -1; -} - -.citation-description { - color: var(--gray); -} - -.citation-buttons { - display: flex; - flex-wrap: wrap; - gap: 10px; -} - -.citation-buttons .button { - margin: 0; -} - -.citation-text > .tags { - display: inline-flex; - justify-content: flex-start; - margin: 0; -} - -@media (max-width: 800px) { - .citation { - flex-direction: column; - } - .citation-image { - width: unset; - height: 180px; - } -} - -/*# sourceMappingURL=citation.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/citation.css.map b/preview/pr-29/_styles/citation.css.map deleted file mode 100644 index 8ad71eef9d..0000000000 --- a/preview/pr-29/_styles/citation.css.map +++ /dev/null @@ -1 +0,0 @@ 
-{"version":3,"sourceRoot":"","sources":["citation.scss"],"names":[],"mappings":"AAGA;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA,OAdW;EAeX;;;AAIF;EACE;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;AAAA;AAAA;AAAA;EAIE;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;;;AAGF;EACE;IACE;;EAGF;IACE;IACA,QAxFS","sourcesContent":["$thumb-size: 180px;\n$wrap: 800px;\n\n.citation {\n display: flex;\n margin: 15px 0;\n border-radius: var(--rounded);\n background: var(--background);\n overflow: hidden;\n box-shadow: var(--shadow);\n}\n\n.citation-image {\n position: relative;\n width: $thumb-size;\n flex-shrink: 0;\n // box-shadow: var(--shadow);\n}\n\n.citation-image img {\n position: absolute;\n inset: 0;\n width: 100%;\n height: 100%;\n object-fit: contain;\n}\n\n.citation-text {\n position: relative;\n display: inline-flex;\n flex-wrap: wrap;\n gap: 15px;\n height: min-content;\n padding: 20px;\n padding-left: 30px;\n text-align: left;\n z-index: 0;\n}\n\n.citation-title,\n.citation-authors,\n.citation-details,\n.citation-description {\n width: 100%;\n line-height: calc(var(--spacing) - 0.4);\n}\n\n.citation-title {\n font-weight: var(--semi-bold);\n}\n\n.citation-text > .icon {\n position: absolute;\n top: 20px;\n right: 20px;\n color: var(--light-gray);\n opacity: 0.5;\n font-size: 30px;\n z-index: -1;\n}\n\n.citation-description {\n color: var(--gray);\n}\n\n.citation-buttons {\n display: flex;\n flex-wrap: wrap;\n gap: 10px;\n}\n\n.citation-buttons .button {\n margin: 0;\n}\n\n.citation-text > .tags {\n display: inline-flex;\n justify-content: flex-start;\n margin: 0;\n}\n\n@media (max-width: $wrap) {\n .citation {\n flex-direction: column;\n }\n\n .citation-image {\n width: unset;\n height: $thumb-size;\n }\n}\n"],"file":"citation.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/code.css b/preview/pr-29/_styles/code.css deleted file 
mode 100644 index a8077f4cf0..0000000000 --- a/preview/pr-29/_styles/code.css +++ /dev/null @@ -1,35 +0,0 @@ -pre, -code, -pre *, -code * { - font-family: var(--code); -} - -code.highlighter-rouge { - padding: 2px 6px; - background: var(--light-gray); - border-radius: var(--rounded); - line-height: calc(var(--spacing) - 0.2); -} - -div.highlighter-rouge { - width: 100%; - margin: 40px 0; - border-radius: var(--rounded); - overflow-x: auto; - overflow-y: auto; - text-align: left; - line-height: calc(var(--spacing) - 0.4); -} -div.highlighter-rouge div.highlight { - display: contents; -} -div.highlighter-rouge div.highlight pre.highlight { - width: fit-content; - min-width: 100%; - margin: 0; - padding: 20px; - color: var(--white); -} - -/*# sourceMappingURL=code.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/code.css.map b/preview/pr-29/_styles/code.css.map deleted file mode 100644 index 048eb76492..0000000000 --- a/preview/pr-29/_styles/code.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["code.scss"],"names":[],"mappings":"AAAA;AAAA;AAAA;AAAA;EAIE;;;AAIF;EACE;EACA;EACA;EACA;;;AAIF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;AAEA;EACE;;AAEA;EACE;EACA;EACA;EACA;EACA","sourcesContent":["pre,\ncode,\npre *,\ncode * {\n font-family: var(--code);\n}\n\n// inline code\ncode.highlighter-rouge {\n padding: 2px 6px;\n background: var(--light-gray);\n border-radius: var(--rounded);\n line-height: calc(var(--spacing) - 0.2);\n}\n\n// code block\ndiv.highlighter-rouge {\n width: 100%;\n margin: 40px 0;\n border-radius: var(--rounded);\n overflow-x: auto;\n overflow-y: auto;\n text-align: left;\n line-height: calc(var(--spacing) - 0.4);\n\n div.highlight {\n display: contents;\n\n pre.highlight {\n width: fit-content;\n min-width: 100%;\n margin: 0;\n padding: 20px;\n color: var(--white);\n }\n }\n}\n"],"file":"code.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/cols.css b/preview/pr-29/_styles/cols.css 
deleted file mode 100644 index 13dfb6e88d..0000000000 --- a/preview/pr-29/_styles/cols.css +++ /dev/null @@ -1,34 +0,0 @@ -.cols { - display: grid; - --repeat: min(3, var(--cols)); - grid-template-columns: repeat(var(--repeat), 1fr); - align-items: flex-start; - gap: 40px; - margin: 40px 0; -} - -.cols > * { - min-width: 0; - min-height: 0; -} - -.cols > div > *:first-child { - margin-top: 0 !important; -} - -.cols > div > *:last-child { - margin-bottom: 0 !important; -} - -@media (max-width: 750px) { - .cols { - --repeat: min(2, var(--cols)); - } -} -@media (max-width: 500px) { - .cols { - --repeat: min(1, var(--cols)); - } -} - -/*# sourceMappingURL=cols.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/cols.css.map b/preview/pr-29/_styles/cols.css.map deleted file mode 100644 index 488b82723c..0000000000 --- a/preview/pr-29/_styles/cols.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["cols.scss"],"names":[],"mappings":"AAGA;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;IACE;;;AAIJ;EACE;IACE","sourcesContent":["$two: 750px;\n$one: 500px;\n\n.cols {\n display: grid;\n --repeat: min(3, var(--cols));\n grid-template-columns: repeat(var(--repeat), 1fr);\n align-items: flex-start;\n gap: 40px;\n margin: 40px 0;\n}\n\n.cols > * {\n min-width: 0;\n min-height: 0;\n}\n\n.cols > div > *:first-child {\n margin-top: 0 !important;\n}\n\n.cols > div > *:last-child {\n margin-bottom: 0 !important;\n}\n\n@media (max-width: $two) {\n .cols {\n --repeat: min(2, var(--cols));\n }\n}\n\n@media (max-width: $one) {\n .cols {\n --repeat: min(1, var(--cols));\n }\n}\n"],"file":"cols.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/dark-toggle.css b/preview/pr-29/_styles/dark-toggle.css deleted file mode 100644 index daedc5dbff..0000000000 --- a/preview/pr-29/_styles/dark-toggle.css +++ /dev/null @@ -1,31 +0,0 @@ -.dark-toggle { - position: relative; - width: 40px; - height: 
25px; - margin: 0; - border-radius: 999px; - background: var(--primary); - -webkit-appearance: none; - appearance: none; - transition-property: background; -} - -.dark-toggle:after { - content: "\f185"; - position: absolute; - left: 12px; - top: 50%; - color: var(--text); - font-size: 15px; - font-family: "Font Awesome 6 Free"; - font-weight: 900; - transform: translate(-50%, -50%); - transition: left 0.2s; -} - -.dark-toggle:checked:after { - content: "\f186"; - left: calc(100% - 12px); -} - -/*# sourceMappingURL=dark-toggle.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/dark-toggle.css.map b/preview/pr-29/_styles/dark-toggle.css.map deleted file mode 100644 index 88294f04d2..0000000000 --- a/preview/pr-29/_styles/dark-toggle.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["dark-toggle.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA","sourcesContent":[".dark-toggle {\n position: relative;\n width: 40px;\n height: 25px;\n margin: 0;\n border-radius: 999px;\n background: var(--primary);\n -webkit-appearance: none;\n appearance: none;\n transition-property: background;\n}\n\n.dark-toggle:after {\n content: \"\\f185\";\n position: absolute;\n left: 12px;\n top: 50%;\n color: var(--text);\n font-size: 15px;\n font-family: \"Font Awesome 6 Free\";\n font-weight: 900;\n transform: translate(-50%, -50%);\n transition: left 0.2s;\n}\n\n.dark-toggle:checked:after {\n content: \"\\f186\";\n left: calc(100% - 12px);\n}\n"],"file":"dark-toggle.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/feature.css b/preview/pr-29/_styles/feature.css deleted file mode 100644 index a2d72f6b6d..0000000000 --- a/preview/pr-29/_styles/feature.css +++ /dev/null @@ -1,49 +0,0 @@ -.feature { - display: flex; - justify-content: center; - align-items: center; - gap: 40px; - margin: 40px 0; -} - -.feature-image { - 
flex-shrink: 0; - width: 40%; - aspect-ratio: 3/2; - border-radius: var(--rounded); - overflow: hidden; - box-shadow: var(--shadow); -} - -.feature-image img { - width: 100%; - height: 100%; - object-fit: cover; -} - -.feature-text { - flex-grow: 1; -} - -.feature-title { - font-size: var(--large); - text-align: center; - font-family: var(--heading); - font-weight: var(--semi-bold); -} - -.feature[data-flip] { - flex-direction: row-reverse; -} - -@media (max-width: 800px) { - .feature { - flex-direction: column !important; - } - .feature-image { - width: unset; - max-width: 400px; - } -} - -/*# sourceMappingURL=feature.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/feature.css.map b/preview/pr-29/_styles/feature.css.map deleted file mode 100644 index 60e3d5323a..0000000000 --- a/preview/pr-29/_styles/feature.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["feature.scss"],"names":[],"mappings":"AAEA;EACE;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;IACE;;EAGF;IACE;IACA","sourcesContent":["$wrap: 800px;\n\n.feature {\n display: flex;\n justify-content: center;\n align-items: center;\n gap: 40px;\n margin: 40px 0;\n}\n\n.feature-image {\n flex-shrink: 0;\n width: 40%;\n aspect-ratio: 3 / 2;\n border-radius: var(--rounded);\n overflow: hidden;\n box-shadow: var(--shadow);\n}\n\n.feature-image img {\n width: 100%;\n height: 100%;\n object-fit: cover;\n}\n\n.feature-text {\n flex-grow: 1;\n}\n\n.feature-title {\n font-size: var(--large);\n text-align: center;\n font-family: var(--heading);\n font-weight: var(--semi-bold);\n}\n\n.feature[data-flip] {\n flex-direction: row-reverse;\n}\n\n@media (max-width: $wrap) {\n .feature {\n flex-direction: column !important;\n }\n\n .feature-image {\n width: unset;\n max-width: calc($wrap / 2);\n }\n}\n"],"file":"feature.css"} \ No newline at end of file diff --git 
a/preview/pr-29/_styles/figure.css b/preview/pr-29/_styles/figure.css deleted file mode 100644 index 95589387ff..0000000000 --- a/preview/pr-29/_styles/figure.css +++ /dev/null @@ -1,25 +0,0 @@ -.figure { - display: flex; - justify-content: center; - align-items: center; - flex-direction: column; - gap: 10px; - margin: 40px 0; -} - -.figure-image { - display: contents; -} - -.figure-image img { - border-radius: var(--rounded); - overflow: hidden; - box-shadow: var(--shadow); -} - -.figure-caption { - font-style: italic; - text-align: center; -} - -/*# sourceMappingURL=figure.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/figure.css.map b/preview/pr-29/_styles/figure.css.map deleted file mode 100644 index 4d62fcf185..0000000000 --- a/preview/pr-29/_styles/figure.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["figure.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;;;AAGF;EACE;EACA","sourcesContent":[".figure {\n display: flex;\n justify-content: center;\n align-items: center;\n flex-direction: column;\n gap: 10px;\n margin: 40px 0;\n}\n\n.figure-image {\n display: contents;\n}\n\n.figure-image img {\n border-radius: var(--rounded);\n overflow: hidden;\n box-shadow: var(--shadow);\n}\n\n.figure-caption {\n font-style: italic;\n text-align: center;\n}\n"],"file":"figure.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/float.css b/preview/pr-29/_styles/float.css deleted file mode 100644 index d546e312cb..0000000000 --- a/preview/pr-29/_styles/float.css +++ /dev/null @@ -1,34 +0,0 @@ -.float { - margin-bottom: 20px; - max-width: 50%; -} - -.float > * { - margin: 0 !important; -} - -.float:not([data-flip]) { - float: left; - margin-right: 40px; -} - -.float[data-flip] { - float: right; - margin-left: 40px; -} - -.float[data-clear] { - float: unset; - clear: both; - margin: 0; -} - -@media (max-width: 600px) { - .float { - float: unset !important; 
- clear: both !important; - margin: auto !important; - } -} - -/*# sourceMappingURL=float.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/float.css.map b/preview/pr-29/_styles/float.css.map deleted file mode 100644 index 863910770d..0000000000 --- a/preview/pr-29/_styles/float.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["float.scss"],"names":[],"mappings":"AAEA;EACE;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;;;AAGF;EACE;EACA;;;AAGF;EACE;EACA;EACA;;;AAGF;EACE;IACE;IACA;IACA","sourcesContent":["$wrap: 600px;\n\n.float {\n margin-bottom: 20px;\n max-width: 50%;\n}\n\n.float > * {\n margin: 0 !important;\n}\n\n.float:not([data-flip]) {\n float: left;\n margin-right: 40px;\n}\n\n.float[data-flip] {\n float: right;\n margin-left: 40px;\n}\n\n.float[data-clear] {\n float: unset;\n clear: both;\n margin: 0;\n}\n\n@media (max-width: $wrap) {\n .float {\n float: unset !important;\n clear: both !important;\n margin: auto !important;\n }\n}\n"],"file":"float.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/font.css b/preview/pr-29/_styles/font.css deleted file mode 100644 index c40e155902..0000000000 --- a/preview/pr-29/_styles/font.css +++ /dev/null @@ -1,3 +0,0 @@ -@font-face {} - -/*# sourceMappingURL=font.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/font.css.map b/preview/pr-29/_styles/font.css.map deleted file mode 100644 index e1d56c0444..0000000000 --- a/preview/pr-29/_styles/font.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["font.scss"],"names":[],"mappings":"AAAA","sourcesContent":["@font-face {\n}\n"],"file":"font.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/footer.css b/preview/pr-29/_styles/footer.css deleted file mode 100644 index a85b907fee..0000000000 --- a/preview/pr-29/_styles/footer.css +++ /dev/null @@ -1,24 +0,0 @@ -footer { - display: flex; - justify-content: center; - align-items: center; - 
flex-direction: column; - gap: 20px; - padding: 40px; - line-height: var(--spacing); - box-shadow: var(--shadow); -} - -footer a { - color: var(--text) !important; -} - -footer a:hover { - color: var(--primary) !important; -} - -footer .icon { - font-size: var(--xl); -} - -/*# sourceMappingURL=footer.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/footer.css.map b/preview/pr-29/_styles/footer.css.map deleted file mode 100644 index 61ae1179a5..0000000000 --- a/preview/pr-29/_styles/footer.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["footer.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE","sourcesContent":["footer {\n display: flex;\n justify-content: center;\n align-items: center;\n flex-direction: column;\n gap: 20px;\n padding: 40px;\n line-height: var(--spacing);\n box-shadow: var(--shadow);\n}\n\nfooter a {\n color: var(--text) !important;\n}\n\nfooter a:hover {\n color: var(--primary) !important;\n}\n\nfooter .icon {\n font-size: var(--xl);\n}\n"],"file":"footer.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/form.css b/preview/pr-29/_styles/form.css deleted file mode 100644 index 761145950c..0000000000 --- a/preview/pr-29/_styles/form.css +++ /dev/null @@ -1,8 +0,0 @@ -form { - display: flex; - justify-content: center; - align-items: center; - gap: 10px; -} - -/*# sourceMappingURL=form.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/form.css.map b/preview/pr-29/_styles/form.css.map deleted file mode 100644 index 65939cb61c..0000000000 --- a/preview/pr-29/_styles/form.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["form.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA","sourcesContent":["form {\n display: flex;\n justify-content: center;\n align-items: center;\n gap: 10px;\n}\n"],"file":"form.css"} \ No newline at end of file diff --git 
a/preview/pr-29/_styles/grid.css b/preview/pr-29/_styles/grid.css deleted file mode 100644 index 3931eb21de..0000000000 --- a/preview/pr-29/_styles/grid.css +++ /dev/null @@ -1,45 +0,0 @@ -.grid { - display: grid; - --repeat: 3; - grid-template-columns: repeat(var(--repeat), 1fr); - justify-content: center; - align-items: flex-start; - gap: 40px; - margin: 40px 0; -} - -.grid > * { - min-width: 0; - min-height: 0; - width: 100%; - margin: 0 !important; -} - -@media (max-width: 750px) { - .grid { - --repeat: 2; - } -} -@media (max-width: 500px) { - .grid { - --repeat: 1; - } -} -.grid[data-style=square] { - align-items: center; -} -.grid[data-style=square] > * { - aspect-ratio: 1/1; -} -.grid[data-style=square] img { - aspect-ratio: 1/1; - object-fit: cover; - max-width: unset; - max-height: unset; -} - -.grid > *:where(h1, h2, h3, h4) { - display: none; -} - -/*# sourceMappingURL=grid.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/grid.css.map b/preview/pr-29/_styles/grid.css.map deleted file mode 100644 index 7baeedc0d1..0000000000 --- a/preview/pr-29/_styles/grid.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["grid.scss"],"names":[],"mappings":"AAGA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EAEA;;;AAGF;EACE;IACE;;;AAIJ;EACE;IACE;;;AAIJ;EACE;;AAEA;EACE;;AAGF;EACE;EACA;EACA;EACA;;;AAIJ;EACE","sourcesContent":["$two: 750px;\n$one: 500px;\n\n.grid {\n display: grid;\n --repeat: 3;\n grid-template-columns: repeat(var(--repeat), 1fr);\n justify-content: center;\n align-items: flex-start;\n gap: 40px;\n margin: 40px 0;\n}\n\n.grid > * {\n min-width: 0;\n min-height: 0;\n width: 100%;\n // max-height: 50vh;\n margin: 0 !important;\n}\n\n@media (max-width: $two) {\n .grid {\n --repeat: 2;\n }\n}\n\n@media (max-width: $one) {\n .grid {\n --repeat: 1;\n }\n}\n\n.grid[data-style=\"square\"] {\n align-items: center;\n\n & > * {\n aspect-ratio: 1 / 1;\n }\n\n & img {\n aspect-ratio: 1 / 1;\n object-fit: 
cover;\n max-width: unset;\n max-height: unset;\n }\n}\n\n.grid > *:where(h1, h2, h3, h4) {\n display: none;\n}\n"],"file":"grid.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/header.css b/preview/pr-29/_styles/header.css deleted file mode 100644 index a9676b7e5b..0000000000 --- a/preview/pr-29/_styles/header.css +++ /dev/null @@ -1,146 +0,0 @@ -header { - display: flex; - justify-content: space-between; - align-items: center; - flex-wrap: wrap; - gap: 20px; - padding: 20px; - box-shadow: var(--shadow); - position: sticky !important; - top: 0; - z-index: 10 !important; -} - -header a { - color: var(--text); - text-decoration: none; -} - -.home { - display: flex; - justify-content: flex-start; - align-items: center; - gap: 10px; - flex-basis: 0; - flex-grow: 1; - max-width: 100%; -} - -.logo { - height: 30px; -} - -.logo > * { - height: 100%; -} - -.title { - display: flex; - justify-content: flex-start; - align-items: baseline; - flex-wrap: wrap; - gap: 5px; - min-width: 0; - font-family: var(--title); - text-align: left; -} - -.title > *:first-child { - font-size: var(--large); -} - -.title > *:last-child { - opacity: 0.65; - font-weight: var(--thin); -} - -.nav-toggle { - display: none; - position: relative; - width: 30px; - height: 30px; - margin: 0; - color: var(--text); - -webkit-appearance: none; - appearance: none; - transition-property: background; -} - -.nav-toggle:after { - content: "\f0c9"; - position: absolute; - left: 50%; - top: 50%; - color: var(--text); - font-size: 15px; - font-family: "Font Awesome 6 Free"; - font-weight: 900; - transform: translate(-50%, -50%); -} - -.nav-toggle:checked:after { - content: "\f00d"; -} - -nav { - display: flex; - justify-content: center; - align-items: center; - flex-wrap: wrap; - gap: 10px; - font-family: var(--heading); - text-transform: uppercase; -} - -nav > a { - padding: 5px; -} - -nav > a:hover { - color: var(--primary); -} - -@media (max-width: 700px) { - header:not([data-big]) { - 
justify-content: flex-end; - } - header:not([data-big]) .nav-toggle { - display: flex; - } - header:not([data-big]) .nav-toggle:not(:checked) + nav { - display: none; - } - header:not([data-big]) nav { - align-items: flex-end; - flex-direction: column; - width: 100%; - } -} - -header[data-big] { - justify-content: center; - align-items: center; - flex-direction: column; - padding: 100px 20px; - top: unset; -} -header[data-big] .home { - flex-direction: column; - flex-grow: 0; -} -header[data-big] .logo { - height: 70px; -} -header[data-big] .title { - flex-direction: column; - align-items: center; - text-align: center; -} -header[data-big] .title > *:first-child { - font-size: var(--xxl); -} -header[data-big] .title > *:last-child { - font-size: var(--large); -} - -/*# sourceMappingURL=header.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/header.css.map b/preview/pr-29/_styles/header.css.map deleted file mode 100644 index 063b3470b7..0000000000 --- a/preview/pr-29/_styles/header.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["header.scss"],"names":[],"mappings":"AAMA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EAGE;EACA;EACA;;;AAIJ;EACE;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE,QArCK;;;AAwCP;EACE;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAIF;EACE;;;AAIF;EACE;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;;;AAIA;EADF;IAEI;;EAEA;IACE;;EAGF;IACE;;EAGF;IACE;IACA;IACA;;;;AAKN;EACE;EACA;EACA;EACA;EAGE;;AAGF;EACE;EACA;;AAGF;EACE,QArJO;;AAwJT;EACE;EACA;EACA;;AAGF;EACE;;AAGF;EACE","sourcesContent":["$logo-big: 70px;\n$logo: 30px;\n$big-padding: 100px;\n$collapse: 700px;\n$sticky: true;\n\nheader {\n display: flex;\n justify-content: space-between;\n align-items: center;\n flex-wrap: wrap;\n gap: 20px;\n padding: 20px;\n box-shadow: var(--shadow);\n\n @if $sticky 
{\n position: sticky !important;\n top: 0;\n z-index: 10 !important;\n }\n}\n\nheader a {\n color: var(--text);\n text-decoration: none;\n}\n\n.home {\n display: flex;\n justify-content: flex-start;\n align-items: center;\n gap: 10px;\n flex-basis: 0;\n flex-grow: 1;\n max-width: 100%;\n}\n\n.logo {\n height: $logo;\n}\n\n.logo > * {\n height: 100%;\n}\n\n.title {\n display: flex;\n justify-content: flex-start;\n align-items: baseline;\n flex-wrap: wrap;\n gap: 5px;\n min-width: 0;\n font-family: var(--title);\n text-align: left;\n}\n\n// main title\n.title > *:first-child {\n font-size: var(--large);\n}\n\n// subtitle\n.title > *:last-child {\n opacity: 0.65;\n font-weight: var(--thin);\n}\n\n.nav-toggle {\n display: none;\n position: relative;\n width: 30px;\n height: 30px;\n margin: 0;\n color: var(--text);\n -webkit-appearance: none;\n appearance: none;\n transition-property: background;\n}\n\n.nav-toggle:after {\n content: \"\\f0c9\";\n position: absolute;\n left: 50%;\n top: 50%;\n color: var(--text);\n font-size: 15px;\n font-family: \"Font Awesome 6 Free\";\n font-weight: 900;\n transform: translate(-50%, -50%);\n}\n\n.nav-toggle:checked:after {\n content: \"\\f00d\";\n}\n\nnav {\n display: flex;\n justify-content: center;\n align-items: center;\n flex-wrap: wrap;\n gap: 10px;\n font-family: var(--heading);\n text-transform: uppercase;\n}\n\nnav > a {\n padding: 5px;\n}\n\nnav > a:hover {\n color: var(--primary);\n}\n\nheader:not([data-big]) {\n @media (max-width: $collapse) {\n justify-content: flex-end;\n\n .nav-toggle {\n display: flex;\n }\n\n .nav-toggle:not(:checked) + nav {\n display: none;\n }\n\n nav {\n align-items: flex-end;\n flex-direction: column;\n width: 100%;\n }\n }\n}\n\nheader[data-big] {\n justify-content: center;\n align-items: center;\n flex-direction: column;\n padding: $big-padding 20px;\n\n @if $sticky {\n top: unset;\n }\n\n .home {\n flex-direction: column;\n flex-grow: 0;\n }\n\n .logo {\n height: $logo-big;\n }\n\n .title {\n 
flex-direction: column;\n align-items: center;\n text-align: center;\n }\n\n .title > *:first-child {\n font-size: var(--xxl);\n }\n\n .title > *:last-child {\n font-size: var(--large);\n }\n}\n"],"file":"header.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/heading.css b/preview/pr-29/_styles/heading.css deleted file mode 100644 index 3aa63ac6a2..0000000000 --- a/preview/pr-29/_styles/heading.css +++ /dev/null @@ -1,51 +0,0 @@ -h1, -h2, -h3, -h4, -h5, -h6 { - font-family: var(--heading); - line-height: calc(var(--spacing) - 0.2); -} - -h1 { - margin: 40px 0 20px 0; - font-size: var(--xxl); - font-weight: var(--regular); - letter-spacing: 1px; - text-transform: uppercase; - text-align: left; -} - -h2 { - margin: 40px 0 20px 0; - padding-bottom: 5px; - border-bottom: solid 1px var(--light-gray); - font-size: var(--xl); - font-weight: var(--regular); - letter-spacing: 1px; - text-align: left; -} - -h3 { - margin: 40px 0 20px 0; - font-size: var(--large); - font-weight: var(--semi-bold); - text-align: left; -} - -h4, -h5, -h6 { - margin: 40px 0 20px 0; - font-size: var(--medium); - font-weight: var(--semi-bold); - text-align: left; -} - -:where(h1, h2, h3, h4, h5, h6) > .icon { - margin-right: 1em; - color: var(--light-gray); -} - -/*# sourceMappingURL=heading.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/heading.css.map b/preview/pr-29/_styles/heading.css.map deleted file mode 100644 index 5a3ce6adf6..0000000000 --- a/preview/pr-29/_styles/heading.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["heading.scss"],"names":[],"mappings":"AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;EAME;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;;;AAGF;AAAA;AAAA;EAGE;EACA;EACA;EACA;;;AAGF;EACE;EACA","sourcesContent":["h1,\nh2,\nh3,\nh4,\nh5,\nh6 {\n font-family: var(--heading);\n line-height: calc(var(--spacing) - 0.2);\n}\n\nh1 {\n margin: 40px 0 20px 0;\n 
font-size: var(--xxl);\n font-weight: var(--regular);\n letter-spacing: 1px;\n text-transform: uppercase;\n text-align: left;\n}\n\nh2 {\n margin: 40px 0 20px 0;\n padding-bottom: 5px;\n border-bottom: solid 1px var(--light-gray);\n font-size: var(--xl);\n font-weight: var(--regular);\n letter-spacing: 1px;\n text-align: left;\n}\n\nh3 {\n margin: 40px 0 20px 0;\n font-size: var(--large);\n font-weight: var(--semi-bold);\n text-align: left;\n}\n\nh4,\nh5,\nh6 {\n margin: 40px 0 20px 0;\n font-size: var(--medium);\n font-weight: var(--semi-bold);\n text-align: left;\n}\n\n:where(h1, h2, h3, h4, h5, h6) > .icon {\n margin-right: 1em;\n color: var(--light-gray);\n}\n"],"file":"heading.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/highlight.css b/preview/pr-29/_styles/highlight.css deleted file mode 100644 index a8cf7d3cee..0000000000 --- a/preview/pr-29/_styles/highlight.css +++ /dev/null @@ -1,6 +0,0 @@ -mark { - background: #fef08a; - color: #000000; -} - -/*# sourceMappingURL=highlight.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/highlight.css.map b/preview/pr-29/_styles/highlight.css.map deleted file mode 100644 index 957ceb13db..0000000000 --- a/preview/pr-29/_styles/highlight.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["highlight.scss"],"names":[],"mappings":"AAAA;EACE;EACA","sourcesContent":["mark {\n background: #fef08a;\n color: #000000;\n}\n"],"file":"highlight.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/icon.css b/preview/pr-29/_styles/icon.css deleted file mode 100644 index ab61327d04..0000000000 --- a/preview/pr-29/_styles/icon.css +++ /dev/null @@ -1,15 +0,0 @@ -.icon { - font-size: 1em; -} - -span.icon { - line-height: 1; -} - -span.icon > svg { - position: relative; - top: 0.1em; - height: 1em; -} - -/*# sourceMappingURL=icon.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/icon.css.map 
b/preview/pr-29/_styles/icon.css.map deleted file mode 100644 index 22298685e4..0000000000 --- a/preview/pr-29/_styles/icon.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["icon.scss"],"names":[],"mappings":"AAAA;EACE;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA","sourcesContent":[".icon {\n font-size: 1em;\n}\n\nspan.icon {\n line-height: 1;\n}\n\nspan.icon > svg {\n position: relative;\n top: 0.1em;\n height: 1em;\n}\n"],"file":"icon.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/image.css b/preview/pr-29/_styles/image.css deleted file mode 100644 index 70340d334d..0000000000 --- a/preview/pr-29/_styles/image.css +++ /dev/null @@ -1,6 +0,0 @@ -img { - max-width: 100%; - max-height: 100%; -} - -/*# sourceMappingURL=image.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/image.css.map b/preview/pr-29/_styles/image.css.map deleted file mode 100644 index e88ec450d0..0000000000 --- a/preview/pr-29/_styles/image.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["image.scss"],"names":[],"mappings":"AAAA;EACE;EACA","sourcesContent":["img {\n max-width: 100%;\n max-height: 100%;\n}\n"],"file":"image.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/link.css b/preview/pr-29/_styles/link.css deleted file mode 100644 index a20e40bcfb..0000000000 --- a/preview/pr-29/_styles/link.css +++ /dev/null @@ -1,15 +0,0 @@ -a { - color: var(--primary); - transition-property: color; - overflow-wrap: break-word; -} - -a:hover { - color: var(--text); -} - -a:not([href]) { - color: var(--text); -} - -/*# sourceMappingURL=link.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/link.css.map b/preview/pr-29/_styles/link.css.map deleted file mode 100644 index 976b37f242..0000000000 --- a/preview/pr-29/_styles/link.css.map +++ /dev/null @@ -1 +0,0 @@ 
-{"version":3,"sourceRoot":"","sources":["link.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE","sourcesContent":["a {\n color: var(--primary);\n transition-property: color;\n overflow-wrap: break-word;\n}\n\na:hover {\n color: var(--text);\n}\n\na:not([href]) {\n color: var(--text);\n}\n"],"file":"link.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/list.css b/preview/pr-29/_styles/list.css deleted file mode 100644 index 02a7cf164f..0000000000 --- a/preview/pr-29/_styles/list.css +++ /dev/null @@ -1,18 +0,0 @@ -ul, -ol { - margin: 20px 0; - padding-left: 40px; -} - -ul { - list-style-type: square; -} - -li { - margin: 5px 0; - padding-left: 10px; - text-align: justify; - line-height: var(--spacing); -} - -/*# sourceMappingURL=list.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/list.css.map b/preview/pr-29/_styles/list.css.map deleted file mode 100644 index 38fb1e506b..0000000000 --- a/preview/pr-29/_styles/list.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["list.scss"],"names":[],"mappings":"AAAA;AAAA;EAEE;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;EACA","sourcesContent":["ul,\nol {\n margin: 20px 0;\n padding-left: 40px;\n}\n\nul {\n list-style-type: square;\n}\n\nli {\n margin: 5px 0;\n padding-left: 10px;\n text-align: justify;\n line-height: var(--spacing);\n}\n"],"file":"list.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/main.css b/preview/pr-29/_styles/main.css deleted file mode 100644 index f72eb0d37e..0000000000 --- a/preview/pr-29/_styles/main.css +++ /dev/null @@ -1,7 +0,0 @@ -main { - display: flex; - flex-direction: column; - flex-grow: 1; -} - -/*# sourceMappingURL=main.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/main.css.map b/preview/pr-29/_styles/main.css.map deleted file mode 100644 index a2a0fa8dc5..0000000000 --- a/preview/pr-29/_styles/main.css.map +++ /dev/null @@ -1 +0,0 @@ 
-{"version":3,"sourceRoot":"","sources":["main.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA","sourcesContent":["main {\n display: flex;\n flex-direction: column;\n flex-grow: 1;\n}\n"],"file":"main.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/paragraph.css b/preview/pr-29/_styles/paragraph.css deleted file mode 100644 index 7e46c39156..0000000000 --- a/preview/pr-29/_styles/paragraph.css +++ /dev/null @@ -1,7 +0,0 @@ -p { - margin: 20px 0; - text-align: justify; - line-height: var(--spacing); -} - -/*# sourceMappingURL=paragraph.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/paragraph.css.map b/preview/pr-29/_styles/paragraph.css.map deleted file mode 100644 index 7eb50a684e..0000000000 --- a/preview/pr-29/_styles/paragraph.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["paragraph.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA","sourcesContent":["p {\n margin: 20px 0;\n text-align: justify;\n line-height: var(--spacing);\n}\n"],"file":"paragraph.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/portrait.css b/preview/pr-29/_styles/portrait.css deleted file mode 100644 index 65650a3a30..0000000000 --- a/preview/pr-29/_styles/portrait.css +++ /dev/null @@ -1,75 +0,0 @@ -.portrait-wrapper { - display: contents; -} - -.portrait { - position: relative; - display: inline-flex; - justify-content: center; - align-items: center; - flex-direction: column; - gap: 20px; - margin: 20px; - width: 175px; - max-width: calc(100% - 20px - 20px); - text-decoration: none; -} - -.portrait[data-style=small] { - width: 100px; -} - -.portrait[data-style=tiny] { - flex-direction: row; - gap: 15px; - width: unset; - text-align: left; -} - -.portrait-image { - width: 100%; - aspect-ratio: 1/1; - border-radius: 999px; - object-fit: cover; - box-shadow: var(--shadow); -} - -.portrait[data-style=tiny] .portrait-image { - width: 50px; -} - -.portrait[data-style=tiny] .portrait-role { - 
display: none; -} - -.portrait-text { - display: flex; - flex-direction: column; - line-height: calc(var(--spacing) - 0.4); -} - -.portrait-name { - font-family: var(--heading); - font-weight: var(--semi-bold); -} - -.portrait-role .icon { - position: absolute; - left: 8px; - top: 8px; - display: flex; - justify-content: center; - align-items: center; - width: 2em; - height: 2em; - border-radius: 999px; - background: var(--background); - box-shadow: var(--shadow); -} - -.portrait[data-style=small] .portrait-role .icon { - left: -2px; - top: -2px; -} - -/*# sourceMappingURL=portrait.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/portrait.css.map b/preview/pr-29/_styles/portrait.css.map deleted file mode 100644 index 1e0ff240e4..0000000000 --- a/preview/pr-29/_styles/portrait.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["portrait.scss"],"names":[],"mappings":"AAAA;EACE;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;EACA;EACA;;;AAGF;EACE;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA","sourcesContent":[".portrait-wrapper {\n display: contents;\n}\n\n.portrait {\n position: relative;\n display: inline-flex;\n justify-content: center;\n align-items: center;\n flex-direction: column;\n gap: 20px;\n margin: 20px;\n width: 175px;\n max-width: calc(100% - 20px - 20px);\n text-decoration: none;\n}\n\n.portrait[data-style=\"small\"] {\n width: 100px;\n}\n\n.portrait[data-style=\"tiny\"] {\n flex-direction: row;\n gap: 15px;\n width: unset;\n text-align: left;\n}\n\n.portrait-image {\n width: 100%;\n aspect-ratio: 1 / 1;\n border-radius: 999px;\n object-fit: cover;\n box-shadow: var(--shadow);\n}\n\n.portrait[data-style=\"tiny\"] .portrait-image {\n width: 50px;\n}\n\n.portrait[data-style=\"tiny\"] .portrait-role {\n display: none;\n}\n\n.portrait-text {\n display: 
flex;\n flex-direction: column;\n line-height: calc(var(--spacing) - 0.4);\n}\n\n.portrait-name {\n font-family: var(--heading);\n font-weight: var(--semi-bold);\n}\n\n.portrait-role .icon {\n position: absolute;\n left: 8px;\n top: 8px;\n display: flex;\n justify-content: center;\n align-items: center;\n width: 2em;\n height: 2em;\n border-radius: 999px;\n background: var(--background);\n box-shadow: var(--shadow);\n}\n\n.portrait[data-style=\"small\"] .portrait-role .icon {\n left: -2px;\n top: -2px;\n}\n"],"file":"portrait.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/post-excerpt.css b/preview/pr-29/_styles/post-excerpt.css deleted file mode 100644 index e9202036b7..0000000000 --- a/preview/pr-29/_styles/post-excerpt.css +++ /dev/null @@ -1,26 +0,0 @@ -.post-excerpt { - display: flex; - flex-wrap: wrap; - gap: 20px; - margin: 20px 0; - padding: 20px 30px; - border-radius: var(--rounded); - background: var(--background); - text-align: left; - box-shadow: var(--shadow); -} - -.post-excerpt > * { - margin: 0 !important; -} - -.post-excerpt > *:first-child { - font-weight: var(--semi-bold); - width: 100%; -} - -.post-excerpt > div { - justify-content: flex-start; -} - -/*# sourceMappingURL=post-excerpt.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/post-excerpt.css.map b/preview/pr-29/_styles/post-excerpt.css.map deleted file mode 100644 index d24db3532f..0000000000 --- a/preview/pr-29/_styles/post-excerpt.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["post-excerpt.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;EACA;;;AAGF;EACE","sourcesContent":[".post-excerpt {\n display: flex;\n flex-wrap: wrap;\n gap: 20px;\n margin: 20px 0;\n padding: 20px 30px;\n border-radius: var(--rounded);\n background: var(--background);\n text-align: left;\n box-shadow: var(--shadow);\n}\n\n.post-excerpt > * {\n margin: 0 
!important;\n}\n\n.post-excerpt > *:first-child {\n font-weight: var(--semi-bold);\n width: 100%;\n}\n\n.post-excerpt > div {\n justify-content: flex-start;\n}\n"],"file":"post-excerpt.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/post-info.css b/preview/pr-29/_styles/post-info.css deleted file mode 100644 index abb6b510d6..0000000000 --- a/preview/pr-29/_styles/post-info.css +++ /dev/null @@ -1,32 +0,0 @@ -.post-info { - display: flex; - justify-content: center; - align-items: center; - flex-wrap: wrap; - gap: 20px; - margin: 20px 0; - color: var(--gray); -} - -.post-info .portrait { - margin: 0; -} - -.post-info .icon { - margin-right: 0.5em; -} - -.post-info a { - color: inherit; -} - -.post-info a:hover { - color: var(--primary); -} - -.post-info > span { - text-align: center; - white-space: nowrap; -} - -/*# sourceMappingURL=post-info.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/post-info.css.map b/preview/pr-29/_styles/post-info.css.map deleted file mode 100644 index 74c149edb8..0000000000 --- a/preview/pr-29/_styles/post-info.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["post-info.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;EACA","sourcesContent":[".post-info {\n display: flex;\n justify-content: center;\n align-items: center;\n flex-wrap: wrap;\n gap: 20px;\n margin: 20px 0;\n color: var(--gray);\n}\n\n.post-info .portrait {\n margin: 0;\n}\n\n.post-info .icon {\n margin-right: 0.5em;\n}\n\n.post-info a {\n color: inherit;\n}\n\n.post-info a:hover {\n color: var(--primary);\n}\n\n.post-info > span {\n text-align: center;\n white-space: nowrap;\n}\n"],"file":"post-info.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/post-nav.css b/preview/pr-29/_styles/post-nav.css deleted file mode 100644 index fe210bb576..0000000000 --- a/preview/pr-29/_styles/post-nav.css +++ 
/dev/null @@ -1,36 +0,0 @@ -.post-nav { - display: flex; - justify-content: space-between; - align-items: flex-start; - gap: 10px; - color: var(--gray); - line-height: calc(var(--spacing) - 0.4); -} - -.post-nav > *:first-child { - text-align: left; -} - -.post-nav > *:last-child { - text-align: right; -} - -.post-nav > *:first-child .icon { - margin-right: 0.5em; -} - -.post-nav > *:last-child .icon { - margin-left: 0.5em; -} - -@media (max-width: 600px) { - .post-nav { - align-items: center; - flex-direction: column; - } - .post-nav > * { - text-align: center !important; - } -} - -/*# sourceMappingURL=post-nav.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/post-nav.css.map b/preview/pr-29/_styles/post-nav.css.map deleted file mode 100644 index 2ba6fba2d6..0000000000 --- a/preview/pr-29/_styles/post-nav.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["post-nav.scss"],"names":[],"mappings":"AAEA;EACE;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;IACE;IACA;;EAGF;IACE","sourcesContent":["$wrap: 600px;\n\n.post-nav {\n display: flex;\n justify-content: space-between;\n align-items: flex-start;\n gap: 10px;\n color: var(--gray);\n line-height: calc(var(--spacing) - 0.4);\n}\n\n.post-nav > *:first-child {\n text-align: left;\n}\n\n.post-nav > *:last-child {\n text-align: right;\n}\n\n.post-nav > *:first-child .icon {\n margin-right: 0.5em;\n}\n\n.post-nav > *:last-child .icon {\n margin-left: 0.5em;\n}\n\n@media (max-width: $wrap) {\n .post-nav {\n align-items: center;\n flex-direction: column;\n }\n\n .post-nav > * {\n text-align: center !important;\n }\n}\n"],"file":"post-nav.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/quote.css b/preview/pr-29/_styles/quote.css deleted file mode 100644 index 456c767fdf..0000000000 --- a/preview/pr-29/_styles/quote.css +++ /dev/null @@ -1,15 +0,0 @@ -blockquote { - margin: 20px 0; - padding: 10px 20px; - border-left: 
solid 4px var(--light-gray); -} - -blockquote > *:first-child { - margin-top: 0; -} - -blockquote > *:last-child { - margin-bottom: 0; -} - -/*# sourceMappingURL=quote.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/quote.css.map b/preview/pr-29/_styles/quote.css.map deleted file mode 100644 index 2cc84a2bca..0000000000 --- a/preview/pr-29/_styles/quote.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["quote.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE","sourcesContent":["blockquote {\n margin: 20px 0;\n padding: 10px 20px;\n border-left: solid 4px var(--light-gray);\n}\n\nblockquote > *:first-child {\n margin-top: 0;\n}\n\nblockquote > *:last-child {\n margin-bottom: 0;\n}\n"],"file":"quote.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/rule.css b/preview/pr-29/_styles/rule.css deleted file mode 100644 index 28ca0809d9..0000000000 --- a/preview/pr-29/_styles/rule.css +++ /dev/null @@ -1,8 +0,0 @@ -hr { - margin: 40px 0; - background: var(--light-gray); - border: none; - height: 1px; -} - -/*# sourceMappingURL=rule.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/rule.css.map b/preview/pr-29/_styles/rule.css.map deleted file mode 100644 index a955dd9fee..0000000000 --- a/preview/pr-29/_styles/rule.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["rule.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA","sourcesContent":["hr {\n margin: 40px 0;\n background: var(--light-gray);\n border: none;\n height: 1px;\n}\n"],"file":"rule.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/search-box.css b/preview/pr-29/_styles/search-box.css deleted file mode 100644 index 9766e9242f..0000000000 --- a/preview/pr-29/_styles/search-box.css +++ /dev/null @@ -1,25 +0,0 @@ -.search-box { - position: relative; - height: 40px; -} - -.search-box .search-input { - width: 100%; - height: 100%; - padding-right: 
40px; -} - -.search-box button { - position: absolute; - inset: 0 0 0 auto; - display: flex; - justify-content: center; - align-items: center; - padding: 0; - aspect-ratio: 1/1; - background: none; - color: var(--black); - border: none; -} - -/*# sourceMappingURL=search-box.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/search-box.css.map b/preview/pr-29/_styles/search-box.css.map deleted file mode 100644 index 7d45274378..0000000000 --- a/preview/pr-29/_styles/search-box.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["search-box.scss"],"names":[],"mappings":"AAAA;EACE;EACA;;;AAGF;EACE;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA","sourcesContent":[".search-box {\n position: relative;\n height: 40px;\n}\n\n.search-box .search-input {\n width: 100%;\n height: 100%;\n padding-right: 40px;\n}\n\n.search-box button {\n position: absolute;\n inset: 0 0 0 auto;\n display: flex;\n justify-content: center;\n align-items: center;\n padding: 0;\n aspect-ratio: 1 / 1;\n background: none;\n color: var(--black);\n border: none;\n}\n"],"file":"search-box.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/search-info.css b/preview/pr-29/_styles/search-info.css deleted file mode 100644 index e5c9a3050e..0000000000 --- a/preview/pr-29/_styles/search-info.css +++ /dev/null @@ -1,8 +0,0 @@ -.search-info { - margin: 20px 0; - text-align: center; - font-style: italic; - line-height: var(--spacing); -} - -/*# sourceMappingURL=search-info.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/search-info.css.map b/preview/pr-29/_styles/search-info.css.map deleted file mode 100644 index d825cee0b8..0000000000 --- a/preview/pr-29/_styles/search-info.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["search-info.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA","sourcesContent":[".search-info {\n margin: 20px 0;\n text-align: center;\n 
font-style: italic;\n line-height: var(--spacing);\n}\n"],"file":"search-info.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/section.css b/preview/pr-29/_styles/section.css deleted file mode 100644 index 995ddcf915..0000000000 --- a/preview/pr-29/_styles/section.css +++ /dev/null @@ -1,35 +0,0 @@ -section { - padding: 40px max(40px, (100% - 1200px) / 2); - transition-property: background, color; -} - -section[data-size=wide] { - padding: 40px; -} - -section[data-size=full] { - padding: 0; -} - -section[data-size=full] > * { - margin: 0; - border-radius: 0; -} - -section[data-size=full] img { - border-radius: 0; -} - -main > section:last-of-type { - flex-grow: 1; -} - -main > section:nth-of-type(odd) { - background: var(--background); -} - -main > section:nth-of-type(even) { - background: var(--background-alt); -} - -/*# sourceMappingURL=section.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/section.css.map b/preview/pr-29/_styles/section.css.map deleted file mode 100644 index 10c01e73ed..0000000000 --- a/preview/pr-29/_styles/section.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["section.scss"],"names":[],"mappings":"AAGA;EACE;EACA;;;AAGF;EACE,SARQ;;;AAWV;EACE;;;AAGF;EACE;EACA;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE;;;AAGF;EACE","sourcesContent":["$page: 1200px;\n$padding: 40px;\n\nsection {\n padding: $padding max($padding, calc((100% - $page) / 2));\n transition-property: background, color;\n}\n\nsection[data-size=\"wide\"] {\n padding: $padding;\n}\n\nsection[data-size=\"full\"] {\n padding: 0;\n}\n\nsection[data-size=\"full\"] > * {\n margin: 0;\n border-radius: 0;\n}\n\nsection[data-size=\"full\"] img {\n border-radius: 0;\n}\n\nmain > section:last-of-type {\n flex-grow: 1;\n}\n\nmain > section:nth-of-type(odd) {\n background: var(--background);\n}\n\nmain > section:nth-of-type(even) {\n background: var(--background-alt);\n}\n"],"file":"section.css"} \ No newline at end of file diff 
--git a/preview/pr-29/_styles/table.css b/preview/pr-29/_styles/table.css deleted file mode 100644 index eb687ccb92..0000000000 --- a/preview/pr-29/_styles/table.css +++ /dev/null @@ -1,21 +0,0 @@ -.table-wrapper { - margin: 40px 0; - overflow-x: auto; -} - -table { - margin: 0 auto; - border-collapse: collapse; -} - -th { - font-weight: var(--semi-bold); -} - -th, -td { - padding: 10px 15px; - border: solid 1px var(--light-gray); -} - -/*# sourceMappingURL=table.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/table.css.map b/preview/pr-29/_styles/table.css.map deleted file mode 100644 index 25e08df6be..0000000000 --- a/preview/pr-29/_styles/table.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["table.scss"],"names":[],"mappings":"AAAA;EACE;EACA;;;AAGF;EACE;EACA;;;AAGF;EACE;;;AAGF;AAAA;EAEE;EACA","sourcesContent":[".table-wrapper {\n margin: 40px 0;\n overflow-x: auto;\n}\n\ntable {\n margin: 0 auto;\n border-collapse: collapse;\n}\n\nth {\n font-weight: var(--semi-bold);\n}\n\nth,\ntd {\n padding: 10px 15px;\n border: solid 1px var(--light-gray);\n}\n"],"file":"table.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/tags.css b/preview/pr-29/_styles/tags.css deleted file mode 100644 index 1225ba410c..0000000000 --- a/preview/pr-29/_styles/tags.css +++ /dev/null @@ -1,33 +0,0 @@ -.tags { - display: inline-flex; - justify-content: center; - align-items: center; - flex-wrap: wrap; - gap: 10px; - max-width: 100%; - margin: 20px 0; -} - -.tag { - max-width: 100%; - margin: 0; - padding: 5px 10px; - border-radius: 999px; - background: var(--secondary); - color: var(--text); - text-decoration: none; - overflow: hidden; - text-overflow: ellipsis; - white-space: nowrap; - transition-property: background, color; -} - -.tag:hover { - background: var(--light-gray); -} - -.tag[data-active] { - background: var(--light-gray); -} - -/*# sourceMappingURL=tags.css.map */ \ No newline at end of file diff --git 
a/preview/pr-29/_styles/tags.css.map b/preview/pr-29/_styles/tags.css.map deleted file mode 100644 index 82c3531985..0000000000 --- a/preview/pr-29/_styles/tags.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["tags.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;;;AAGF;EACE;;;AAGF;EACE","sourcesContent":[".tags {\n display: inline-flex;\n justify-content: center;\n align-items: center;\n flex-wrap: wrap;\n gap: 10px;\n max-width: 100%;\n margin: 20px 0;\n}\n\n.tag {\n max-width: 100%;\n margin: 0;\n padding: 5px 10px;\n border-radius: 999px;\n background: var(--secondary);\n color: var(--text);\n text-decoration: none;\n overflow: hidden;\n text-overflow: ellipsis;\n white-space: nowrap;\n transition-property: background, color;\n}\n\n.tag:hover {\n background: var(--light-gray);\n}\n\n.tag[data-active] {\n background: var(--light-gray);\n}\n"],"file":"tags.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/textbox.css b/preview/pr-29/_styles/textbox.css deleted file mode 100644 index d35615b12c..0000000000 --- a/preview/pr-29/_styles/textbox.css +++ /dev/null @@ -1,17 +0,0 @@ -input[type=text] { - width: 100%; - height: 40px; - margin: 0; - padding: 5px 10px; - border: solid 1px var(--light-gray); - border-radius: var(--rounded); - background: var(--background); - color: var(--text); - font-family: inherit; - font-size: inherit; - -webkit-appearance: none; - appearance: none; - box-shadow: var(--shadow); -} - -/*# sourceMappingURL=textbox.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/textbox.css.map b/preview/pr-29/_styles/textbox.css.map deleted file mode 100644 index 9e46f918be..0000000000 --- a/preview/pr-29/_styles/textbox.css.map +++ /dev/null @@ -1 +0,0 @@ 
-{"version":3,"sourceRoot":"","sources":["textbox.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA;EACA","sourcesContent":["input[type=\"text\"] {\n width: 100%;\n height: 40px;\n margin: 0;\n padding: 5px 10px;\n border: solid 1px var(--light-gray);\n border-radius: var(--rounded);\n background: var(--background);\n color: var(--text);\n font-family: inherit;\n font-size: inherit;\n -webkit-appearance: none;\n appearance: none;\n box-shadow: var(--shadow);\n}\n"],"file":"textbox.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/tooltip.css b/preview/pr-29/_styles/tooltip.css deleted file mode 100644 index 28b590ebf9..0000000000 --- a/preview/pr-29/_styles/tooltip.css +++ /dev/null @@ -1,72 +0,0 @@ -.tippy-box { - background: var(--background); - color: var(--text); - padding: 7.5px; - text-align: left; - box-shadow: var(--shadow); -} - -.tippy-arrow { - width: 30px; - height: 30px; -} - -.tippy-arrow:before { - width: 10px; - height: 10px; - background: var(--background); - box-shadow: var(--shadow); -} - -.tippy-arrow { - overflow: hidden; - pointer-events: none; -} - -.tippy-box[data-placement=top] .tippy-arrow { - inset: unset; - top: 100%; -} - -.tippy-box[data-placement=bottom] .tippy-arrow { - inset: unset; - bottom: 100%; -} - -.tippy-box[data-placement=left] .tippy-arrow { - inset: unset; - left: 100%; -} - -.tippy-box[data-placement=right] .tippy-arrow { - inset: unset; - right: 100%; -} - -.tippy-arrow:before { - border: unset !important; - transform-origin: center !important; - transform: translate(-50%, -50%) rotate(45deg) !important; -} - -.tippy-box[data-placement=top] .tippy-arrow:before { - left: 50% !important; - top: 0 !important; -} - -.tippy-box[data-placement=bottom] .tippy-arrow:before { - left: 50% !important; - top: 100% !important; -} - -.tippy-box[data-placement=left] .tippy-arrow:before { - left: 0 !important; - top: 50% !important; -} - -.tippy-box[data-placement=right] 
.tippy-arrow:before { - left: 100% !important; - top: 50% !important; -} - -/*# sourceMappingURL=tooltip.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/tooltip.css.map b/preview/pr-29/_styles/tooltip.css.map deleted file mode 100644 index 6b52e915fb..0000000000 --- a/preview/pr-29/_styles/tooltip.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["tooltip.scss"],"names":[],"mappings":"AAAA;EACE;EACA;EACA;EACA;EACA;;;AAGF;EACE;EACA;;;AAGF;EACE;EACA;EACA;EACA;;;AAIF;EACE;EACA;;;AAEF;EACE;EACA;;;AAEF;EACE;EACA;;;AAEF;EACE;EACA;;;AAEF;EACE;EACA;;;AAEF;EACE;EACA;EACA;;;AAEF;EACE;EACA;;;AAEF;EACE;EACA;;;AAEF;EACE;EACA;;;AAEF;EACE;EACA","sourcesContent":[".tippy-box {\n background: var(--background);\n color: var(--text);\n padding: 7.5px;\n text-align: left;\n box-shadow: var(--shadow);\n}\n\n.tippy-arrow {\n width: 30px;\n height: 30px;\n}\n\n.tippy-arrow:before {\n width: 10px;\n height: 10px;\n background: var(--background);\n box-shadow: var(--shadow);\n}\n\n// correct tippy arrow styles to support intuitive arrow styles above\n.tippy-arrow {\n overflow: hidden;\n pointer-events: none;\n}\n.tippy-box[data-placement=\"top\"] .tippy-arrow {\n inset: unset;\n top: 100%;\n}\n.tippy-box[data-placement=\"bottom\"] .tippy-arrow {\n inset: unset;\n bottom: 100%;\n}\n.tippy-box[data-placement=\"left\"] .tippy-arrow {\n inset: unset;\n left: 100%;\n}\n.tippy-box[data-placement=\"right\"] .tippy-arrow {\n inset: unset;\n right: 100%;\n}\n.tippy-arrow:before {\n border: unset !important;\n transform-origin: center !important;\n transform: translate(-50%, -50%) rotate(45deg) !important;\n}\n.tippy-box[data-placement=\"top\"] .tippy-arrow:before {\n left: 50% !important;\n top: 0 !important;\n}\n.tippy-box[data-placement=\"bottom\"] .tippy-arrow:before {\n left: 50% !important;\n top: 100% !important;\n}\n.tippy-box[data-placement=\"left\"] .tippy-arrow:before {\n left: 0 !important;\n top: 50% 
!important;\n}\n.tippy-box[data-placement=\"right\"] .tippy-arrow:before {\n left: 100% !important;\n top: 50% !important;\n}\n"],"file":"tooltip.css"} \ No newline at end of file diff --git a/preview/pr-29/_styles/util.css b/preview/pr-29/_styles/util.css deleted file mode 100644 index 995ea77cdd..0000000000 --- a/preview/pr-29/_styles/util.css +++ /dev/null @@ -1,13 +0,0 @@ -.left { - text-align: left; -} - -.center { - text-align: center; -} - -.right { - text-align: right; -} - -/*# sourceMappingURL=util.css.map */ \ No newline at end of file diff --git a/preview/pr-29/_styles/util.css.map b/preview/pr-29/_styles/util.css.map deleted file mode 100644 index c21a68d3fa..0000000000 --- a/preview/pr-29/_styles/util.css.map +++ /dev/null @@ -1 +0,0 @@ -{"version":3,"sourceRoot":"","sources":["util.scss"],"names":[],"mappings":"AAAA;EACE;;;AAGF;EACE;;;AAGF;EACE","sourcesContent":[".left {\n text-align: left;\n}\n\n.center {\n text-align: center;\n}\n\n.right {\n text-align: right;\n}\n"],"file":"util.css"} \ No newline at end of file diff --git a/preview/pr-29/about/index.html b/preview/pr-29/about/index.html deleted file mode 100644 index fe1e8d548d..0000000000 --- a/preview/pr-29/about/index.html +++ /dev/null @@ -1,717 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -About | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-

About

- -

We are a small group of dedicated software developers with the Department of Biomedical Informatics at the University of Colorado Anschutz. -We support the labs, faculty, and staff within the Department, as well as external groups via collaboration.

- -

What we do

- -

Our primary focus is creating high-quality software and maintaining existing software. -We have a diverse team with a wide range of experience and expertise in software projects related to data science, biology, medicine, statistics, and machine learning.

- -

We can take a lab’s ideas and scientific work and turn them into a fully realized software package for experts and laypersons alike, one that enables exploration of data, dissemination of knowledge, collaboration, advanced analyses, new insights, and more.

- -

Some of the things we do are:

- - - -

But the best way to understand the things we do is by looking at the code and using the software yourself:

- - -
- - - - - -
- - -

Teaching and communication

- -

Whenever we can, we like to share our knowledge and skills with others. -We believe this benefits the community we operate in and allows us to create better software together.

- -

On this website, we have a blog where we occasionally post tips, tricks, and other insights related to Git, workflows, code quality, and more.

- -

We have given workshops and personalized lessons related to Docker, cloud services, and more. -We’re always happy to set up a session to discuss technical topics whenever someone has the need.

- -

Scope of our work

- -

Being central to the department, and not strictly associated with any particular lab or group within it, we need to ensure that we divide up our time and effort fairly. -While we can do things like build full-stack apps from scratch and maintain complex infrastructure, the projects we take on tend to be small to medium in size so that we leave ourselves available to others who need our help. -Certain projects that are very large and long-term in scope, such as ones that need to be HIPAA-compliant, will fall outside of our purview and might lead you to hire a dedicated developer to fill your needs. -That said, we can still provide partial support as a consulting body, a repository of information, a hiring advisor, and more.

- -

Contact

- -

Request Support

- -

Start here to establish a project and work with us.

- - - -

Book a Meeting

- -

Schedule a meeting with us about an established project. -If you haven’t met with us yet on this particular project, please start by requesting support above.

- - - -

In the notes field, please specify which team members are optional/required for this meeting. -Also list any additional emails, and we’ll forward the calendar invite to them.

- -

Chat

- -

For general questions or technical help, we also have weekly open-office hours, Thursdays at 2:00 PM Mountain Time in the following Zoom room. -Feel free to stop by!

- - - - - -

You can also come to the Zoom room if you’re unsure about something with the requesting support process mentioned above.

- -

The Team

- - - - - - -
- - -
- - - - - - - diff --git a/preview/pr-29/blog/index.html b/preview/pr-29/blog/index.html deleted file mode 100644 index f801903f4e..0000000000 --- a/preview/pr-29/blog/index.html +++ /dev/null @@ -1,2262 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Blog | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-

-Blog

- - - -
- -

2024

- -
- - - Tip of the Month: Python Memory Management and Troubleshooting - - - - - - - - - - - - - - - -

- -Have you ever run Python code only to find it taking forever to complete or sometimes abruptly ending with an error like: 123456 Killed or killed (program exited with code: 137)? -You may have experienced memory resource or management challenges associated with these scenarios. -This post will cover some computer memory definitions, how Python makes use of computer memory, and share some tools which may help with these types of challenges. - -

-
- -

2023

- -
- - - Tip of the Week: Codesgiving - Open-source Contribution Walkthrough - - - - - - - - - - - - - - - -

- -Thanksgiving is a holiday practiced in many countries which focuses on gratitude for good harvests of the preceding year. -In the United States, we celebrate Thanksgiving on the fourth Thursday of November each year often by eating meals we create together with others. -This post channels the spirit of Thanksgiving by giving our thanks through code as a “Codesgiving”, acknowledging and creating better software together. - -

-
- -
- - - Tip of the Week: Data Quality Validation through Software Testing Techniques - - - - - - - - - - - - - - - -

- -Data-oriented software development can benefit from a specialized focus on varying aspects of data quality validation. -We can use software testing techniques to validate certain qualities of the data in order to meet a declarative standard (where one doesn’t need to guess or rediscover known issues). -These come in a number of forms and generally follow existing software testing concepts which we’ll expand upon below. -This article will cover a few tools which leverage these techniques for addressing data quality validation testing. - -

-
- -
- - - Tip of the Week: Python Packaging as Publishing - - - - - - - - - - - - - - - -

- - -Python packaging is the craft of preparing for and reaching distribution of your Python work to wider audiences. Following conventions for packaging helps your software work become more understandable, trustworthy, and connected (to others and their work). Taking advantage of common packaging practices also strengthens our collective superpower: collaboration. This post will cover preparation aspects of packaging, readying software work for wider distribution. - -

-
- -
- - - Tip of the Week: Using Python and Anaconda with the Alpine HPC Cluster - - - - - - - - - - - - - - - -

- - -This post is intended to help demonstrate the use of Python on Alpine, a High Performance Compute (HPC) cluster hosted by the University of Colorado Boulder’s Research Computing. -We use Python here by way of Anaconda environment management to run code on Alpine. -This post will cover a background on the technologies and how to use the contents of an example project repository as though it were a project you were working on and wanting to run on Alpine. - - -

-
- -
- - - Tip of the Week: Automate Software Workflows with GitHub Actions - - - - - - - - - - - - - - - -

- - -There are many routine tasks which can be automated to help save time and increase reproducibility in software development. GitHub Actions provides one way to accomplish these tasks using code-based workflows and related workflow implementations. This type of automation is commonly used to perform tests, builds (preparing for the delivery of the code), or delivery itself (sending the code or related artifacts where they will be used). - - -

-
- -
- - - Tip of the Week: Branch, Review, and Learn - - - - - - - - - - - - - - - -

- - -Git provides a feature called branching which facilitates parallel and segmented programming work through commits with version control. Using branching enables both work concurrency (multiple people working on the same repository at the same time) as well as a chance to isolate and review specific programming tasks. This article covers some conceptual best practices with branching, reviewing, and merging code using Github. - - -

-
- -
- - - Tip of the Week: Software Linting with R - - - - - - - - - - - - - - - -

- - -This article covers using the software technique of linting on R code in order to improve code quality, development velocity, and collaboration. - - -

-
- -
- - - Tip of the Week: Timebox Your Software Work - - - - - - - - - - - - - - - -

- - -Programming often involves long periods of problem solving which can sometimes lead to unproductive or exhausting outcomes. This article covers one way to avoid less productive time expense or protect yourself from overexhaustion through a technique called “timeboxing” (also sometimes referenced as “timeblocking”). - - -

-
- -
- - - Tip of the Week: Linting Documentation as Code - - - - - - - - - - - - - - - -

- - -Software documentation is sometimes treated as a less important or secondary aspect of software development. Treating documentation as code allows developers to version control the shared understanding and knowledge surrounding a project. Leveraging this paradigm also enables the use of tools and patterns which have been used to strengthen code maintenance. This article covers one such pattern: linting, or static analysis, for documentation treated like code. - - -

-
- -

2022

- -
- - - Tip of the Week: Remove Unused Code to Avoid Software Decay - - - - - - - - - - - - - - - -

- - -The act of creating software often involves many iterations of writing, personal collaborations, and testing. During this process it’s common to lose awareness of code which is no longer used, and thus may not be tested or otherwise linted. Unused code may contribute to “software decay”, the gradual diminishment of code quality or functionality. This post will cover software decay and strategies for addressing unused code to help keep your code quality high. - - -

-
- -
- - - Tip of the Week: Data Engineering with SQL, Arrow and DuckDB - - - - - - - - - - - - - - - -

- - -Apache Arrow is a language-independent and high-performance data format useful in many scenarios. DuckDB is an in-process SQL-based data management system which is Arrow-compatible. In addition to providing a SQLite-like database format, DuckDB also provides a standardized and high-performance way to work with Arrow data where otherwise one may be forced to use language-specific data structures or transforms. - -

-
- -
- - - Tip of the Week: Diagrams as Code - - - - - - - - - - - - - - - -

- - -Diagrams can be a useful way to illuminate and communicate ideas. Free-form drawing or drag and drop tools are one common way to create diagrams. With this tip of the week we introduce another option: diagrams as code (DaC), or creating diagrams by using code. - - -

-
- -
- - - Tip of the Week: Use Linting Tools to Save Time - - - - - - - - - - - - - - - -

- - -Have you ever found yourself spending hours formatting your code so it looks just right? Have you ever caught a duplicative import statement in your code? We recommend using open source linting tools to help avoid common issues like these and save time. - - -

-
- - -
- - -
- - - - - - - diff --git a/preview/pr-29/feed.xml b/preview/pr-29/feed.xml deleted file mode 100644 index cc92cff82d..0000000000 --- a/preview/pr-29/feed.xml +++ /dev/null @@ -1,2530 +0,0 @@ -Jekyll2024-01-25T20:56:54+00:00/set-website/preview/pr-29/feed.xmlSoftware Engineering TeamThe software engineering team of the Department of Biomedical Informatics at the University of Colorado AnschutzTip of the Month: Python Memory Management and Troubleshooting2024-01-22T00:00:00+00:002024-01-25T20:55:52+00:00/set-website/preview/pr-29/2024/01/22/Python-Memory-Management-and-TroubleshootingTip of the Week: Python Memory Management and Troubleshooting - -
- - -
- -

Each month we seek to provide a software tip of the month geared towards helping you achieve your software goals. Views -expressed in the content belong to the content creators and not the organization, its affiliates, or employees. If you -have any software questions or suggestions for an upcoming tip of the week, please don’t hesitate to reach out!

- -
-
- -

Introduction

- - -

Have you ever run Python code only to find it taking forever to complete or sometimes abruptly ending with an error like: 123456 Killed or killed (program exited with code: 137)? -You may have experienced memory resource or management challenges associated with these scenarios. -This post will cover some computer memory definitions, how Python makes use of computer memory, and share some tools which may help with these types of challenges. -

- -

What is Memory?

- -

Computer Memory

- -

- -

Computer memory is a type of computer resource available for use by software on a computer

- -

Computer memory, also known as “RAM” (random-access memory) or “dynamic memory”, is a type of resource used by computer software on a computer. -“Computer memory stores information, such as data and programs for immediate use in the computer. … Main memory operates at a high speed compared to non-memory storage which is slower but less expensive and oftentimes higher in capacity.” (Wikipedia: Computer memory).

Memory Blocks:
A.) All memory blocks available: Block Block Block
B.) Some memory blocks in use: Block Block Block

Practical analogy:
C.) You have limited buckets to hold things: 🪣 🪣 🪣
D.) Two buckets are used, the other remains empty: 🪣 🪣 🪣
- -

Fixed-size memory blocks may be free or used at various times. They can be thought of like reusable buckets to hold things.

- -

One way to organize computer memory is through the use of “fixed-size blocks”, also called “blocks”. -Fixed-size memory blocks are chunks of memory of a certain byte size (usually all the same size). -Memory blocks may be in use or free at different times.

- -

- -

Memory heaps help organize available memory on a computer for specific procedures. Heaps may have one or many memory pools.

- -

Computer memory blocks may be organized in hierarchical layers to manage memory efficiently or towards a specific purpose. -One top-level organization model for computer memory is through the use of heaps which help describe chunks of the total memory available on a computer for specific processes. -These heaps may be private (only available to a specific software process) or shared (available to one or many software processes). -Heaps are sometimes further segmented into pools which are areas of the heap which can be used for specific purposes.

- -

Memory Allocator

- -

- -

Memory allocators help software reserve and free computer memory resources.

- -

Memory management is a concept which helps enable the shared use of computer memory to avoid challenges such as memory overuse (where all memory is in use and never shared with other software). -Computer memory management often occurs through the use of a memory allocator which controls how computer memory resources are used for software. -Computer software is written to interact with memory allocators to use computer memory. -Memory allocators may be used manually (with specific directions provided on when and how to use memory resources) or automatically (with an algorithmic approach of some kind). -The memory allocator usually performs the following actions with memory (in addition to others):

- -
    -
  • “Allocation”: computer memory resource reservation (taking memory). This is sometimes also known as “alloc”, or “allocate memory”.
  • -
  • “Deallocation”: computer memory resource freeing (giving back memory for other uses). This is sometimes also known as “free”, or “freeing memory from allocation”.
  • -
- -
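The allocation and deallocation actions described above can be observed from within Python itself using the standard library’s `tracemalloc` module. A minimal sketch (the million-integer list is an illustrative workload, and the exact byte counts will vary by system):

```python
import tracemalloc

# begin tracing memory allocations made through the Python memory manager
tracemalloc.start()

# allocation: reserve memory by building a large list of integers
data = list(range(1_000_000))
current, peak = tracemalloc.get_traced_memory()
print(f"after allocation: current={current} bytes, peak={peak} bytes")

# deallocation: removing the last reference allows the memory to be freed
del data
current, _ = tracemalloc.get_traced_memory()
print(f"after deallocation: current={current} bytes")

tracemalloc.stop()
```

Running this shows traced memory rising after the list is created and falling again once the only reference to it is deleted.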

Garbage Collection

- -

- -

Garbage collectors help free computer memory which is no longer referenced by software.

- -

“Garbage collection (GC)” is used to describe a type of automated memory management. -“The garbage collector attempts to reclaim memory which was allocated by the program, but is no longer referenced; such memory is called garbage.” (Wikipedia: Garbage collection (computer science)). -A garbage collector often works in tandem with a memory allocator to help control computer memory resource usage in software development.

- -
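As a small illustration of garbage collection in Python specifically, the standard library’s `gc` module can reclaim objects caught in a reference cycle, which reference counting alone cannot free (the `Node` class below is a hypothetical example for demonstration):

```python
import gc


class Node:
    """A hypothetical object which may reference another object."""

    def __init__(self):
        self.other = None


# create two objects which reference each other (a reference cycle)
a, b = Node(), Node()
a.other, b.other = b, a

# drop our references; the cycle keeps each reference count above zero,
# so reference counting alone cannot deallocate these objects
del a, b

# the cyclic garbage collector detects and frees the unreachable cycle
unreachable = gc.collect()
print(f"unreachable objects collected: {unreachable}")
```

The count reported includes the two `Node` instances along with their attribute dictionaries.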

How Does Python Interact with Computer Memory?

- -

Python Overview

- -

- -

A Python interpreter executes Python code and manages memory for Python procedures.

- -

Python is an interpreted “high-level” programming language (Python: What is Python?). -Interpreted languages are those which include an “interpreter” which helps execute code written in a particular way (Wikipedia: Interpreter (computing)). -High-level languages such as Python often remove the requirement for software developers to manually perform memory management (Wikipedia: High-level programming language).

- -

Python code is executed by a commonly pre-packaged and downloaded binary called the Python interpreter. -The Python interpreter reads Python code and performs memory management as the code is executed. -The CPython interpreter is the most commonly used interpreter for Python, and it’s used as the reference for other content here. -There are also other interpreters such as PyPy, Jython, and IronPython which all handle memory differently than the CPython interpreter.

- -

Python’s Memory Manager

- -

- -

The Python memory manager helps manage memory for Python code executed by the Python interpreter.

- -

Memory is managed for Python software processes automatically (when unspecified) or manually (when specified) through the Python interpreter. -The Python memory manager is an abstraction which manages memory for Python software processes through the Python interpreter (Python: Memory Management). -From a high-level perspective, we assume variables and other operations written in Python will automatically allocate and deallocate memory through the Python interpreter when executed. -Python’s memory manager performs work through various memory allocators and a garbage collector (or as configured with customizations) within a private Python memory heap.

- -

Python’s Memory Allocators

- -

- -

The Python memory manager by default will use pymalloc internally or malloc from the system to allocate computer memory resources.

- -

The Python memory manager allocates memory for use through memory allocators. -Python may use one or many memory allocators depending on specifications in Python code and how the Python interpreter is configured (for example, see Python: Memory Management - Default Memory Allocators). -One way to understand Python memory allocators is through the following distinctions.

- -
    -
  • “Python Memory Allocator” (pymalloc) -The Python interpreter is packaged with a specialized memory allocator called pymalloc. -“Python has a pymalloc allocator optimized for small objects (smaller or equal to 512 bytes) with a short lifetime.” (Python: Memory Management - The pymalloc allocator). -Ultimately, pymalloc uses C malloc to implement memory work.
  • -
• C dynamic memory allocator (malloc) -When pymalloc is disabled or memory requirements exceed pymalloc’s constraints, the Python interpreter will directly use a function from the C standard library called malloc. -When malloc is used by the Python interpreter, it uses the system’s existing implementation of malloc.
  • -
- -

- -

pymalloc makes use of arenas to further organize pools within a computer memory heap.

- -

It’s important to note that pymalloc adds additional abstractions to how memory is organized through the use of “arenas”. -These arenas are specific to pymalloc purposes. -pymalloc may be disabled through the use of a special environment variable called PYTHONMALLOC (for example, setting PYTHONMALLOC=malloc forces the interpreter to use only malloc). -This same environment variable may be used with debug settings in order to help troubleshoot in-depth questions.

- -
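Because PYTHONMALLOC must be set before the interpreter starts, it cannot be toggled from within an already-running process. A small sketch of launching a child interpreter with pymalloc disabled:

```python
import os
import subprocess
import sys

# PYTHONMALLOC=malloc disables pymalloc so the child interpreter uses the
# system's C malloc directly for object allocations
env = dict(os.environ, PYTHONMALLOC="malloc")

result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['PYTHONMALLOC'])"],
    env=env,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # malloc
```

The same pattern works with the debug variants (for example, `PYTHONMALLOC=debug`) when deeper troubleshooting is needed.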

Additional Python Memory Allocators

- -

- -

Python code and package dependencies may stipulate the use of additional memory allocators, such as mimalloc and jemalloc outside of the Python memory manager.

- -

Python provides the capability of customizing memory allocation through the use of packages. -See below for some notable examples of additional memory allocation possibilities.

- -
    -
• NumPy Memory Allocation -NumPy uses custom C-APIs which are backed by C dynamic memory allocation functions (malloc, free, realloc) to help address memory management. -These interfaces can be controlled directly through NumPy to help manage memory effectively when using the package.
  • -
• PyArrow Memory Allocators -PyArrow provides the capability to use malloc, jemalloc, or mimalloc through the PyArrow Memory Pools group of functions. -A default memory allocator is selected when using PyArrow based on the operating system and the availability of the memory allocator on the system. -The selection of a memory allocator for use with PyArrow can be influenced by how it performs on a particular system.
  • -
- -
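As a sketch of the PyArrow case, the default memory pool and its backing allocator can be inspected directly (this assumes the pyarrow package is installed; the guard below lets the snippet degrade gracefully when it is not):

```python
# inspect which memory allocator backs PyArrow's default memory pool
try:
    import pyarrow as pa

    pool = pa.default_memory_pool()
    # backend_name is typically "jemalloc", "mimalloc", or "system"
    print(f"default allocator: {pool.backend_name}")
    print(f"bytes currently allocated: {pool.bytes_allocated()}")
except ImportError:
    print("pyarrow is not installed")
```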

Python Reference Counting

| Processed line of code | Reference count |
| --- | --- |
| `a_string = "cornucopia"` | `a_string`: 1 |
| `reference_a_string = a_string` | `a_string`: 2 (because `a_string` is now referenced twice) |
| `del reference_a_string` | `a_string`: 1 (because the additional reference has been deleted) |

_Python reference counting at a simple level works through the use of object reference increments and decrements._

As computer memory is allocated to Python processes, the Python memory manager keeps track of it through the use of a [reference counter](https://en.wikipedia.org/wiki/Reference_counting).
In Python, we could label this as an "object reference counter" because all data in Python is represented by objects ([Python: Data model](https://docs.python.org/3/reference/datamodel.html#objects-values-and-types)).
"... CPython counts how many different places there are that have a reference to an object. Such a place could be another object, or a global (or static) C variable, or a local variable in some C function." ([Python Developer's Guide: Garbage collector design](https://devguide.python.org/internals/garbage-collector/))

### Python's Garbage Collection

_The Python garbage collector works as part of the Python memory manager to free memory which is no longer needed (based on reference count)._

Python by default uses an optional garbage collector to automatically deallocate garbage memory through the Python interpreter in CPython.
"When an object’s reference count becomes zero, the object is deallocated." ([Python Developer's Guide: Garbage collector design](https://devguide.python.org/internals/garbage-collector/))
Python's garbage collector focuses on collecting garbage created by `pymalloc`, C memory functions, as well as other memory allocators like `mimalloc` and `jemalloc`.

## Python Tools for Observing Memory Behavior

### Python Built-in Tools

```python
import gc
import sys

# set gc in debug mode for detecting memory leaks
gc.set_debug(gc.DEBUG_LEAK)

# create an int object
an_object = 1

# show the number of uncollectable references via COLLECTED
COLLECTED = gc.collect()
print(f"Uncollectable garbage references: {COLLECTED}")

# show the reference count for an object
print(f"Reference count of `an_object`: {sys.getrefcount(an_object)}")
```

The [`gc` module](https://docs.python.org/3/library/gc.html) provides an interface to the Python garbage collector.
In addition, the [`sys` module](https://docs.python.org/3/library/sys.html) provides many functions which give information about references and other details about Python objects as they are executed through the interpreter.
These functions and other packages can help software developers observe memory behaviors within Python procedures.

### Python Package: Scalene

_Scalene provides a web interface to analyze memory, CPU, and GPU resource consumption in one spot alongside suggested areas of concern._

[Scalene](https://github.com/plasma-umass/scalene) is a Python package for analyzing memory, CPU, and GPU resource consumption.
It provides [a web interface](https://github.com/plasma-umass/scalene?tab=readme-ov-file#web-based-gui) to help visualize and understand how resources are consumed.
Scalene provides suggestions on which portions of your code to troubleshoot through the web interface.
Scalene can also be configured to work with [OpenAI](https://en.wikipedia.org/wiki/OpenAI) [LLMs](https://en.wikipedia.org/wiki/Large_language_model) by way of an [OpenAI API key provided by the user](https://github.com/plasma-umass/scalene?tab=readme-ov-file#ai-powered-optimization-suggestions).

### Python Package: Memray

_Memray provides the ability to create and view flamegraphs which show how memory was consumed as a procedure executed._

[Memray](https://github.com/bloomberg/memray) is a Python package to track memory allocation within Python and compiled extension modules.
Memray provides a high-level way to investigate memory performance and adds visualizations such as [flamegraphs](https://www.brendangregg.com/flamegraphs.html) (which contextualize [stack traces](https://en.wikipedia.org/wiki/Stack_trace) and memory allocations in one spot).
Memray seeks to provide a way to overcome challenges with tracking and understanding Python and other memory allocators (such as C, C++, or Rust libraries used in tandem with a Python process).

## Concluding Thoughts

It's worth mentioning that this article covers only a small fraction of how and what memory is as well as how Python might make use of it.
Hopefully it clarifies the process and provides a way to get started with investigating memory within the software you work with.
Wishing you the very best in your software journey with memory!
]]>
dave-bunten
Tip of the Week: Codesgiving - Open-source Contribution Walkthrough2023-11-15T00:00:00+00:002024-01-25T20:55:52+00:00/set-website/preview/pr-29/2023/11/15/Codesgiving-Open-source-Contribution-WalkthroughTip of the Week: Codesgiving - Open-source Contribution Walkthrough - -
- - -
- -

Each week we seek to provide a software tip of the week geared towards helping you achieve your software goals. Views -expressed in the content belong to the content creators and not the organization, its affiliates, or employees. If you -have any software questions or suggestions for an upcoming tip of the week, please don’t hesitate to reach out to -#software-engineering on Slack or email DBMISoftwareEngineering at olucdenver.onmicrosoft.com

- -
-
- -

Introduction

- -
- - What good harvests from open-source have you experienced this year? - - -
- What good harvests from open-source have you experienced this year? - -
- -
- - -

Thanksgiving is a holiday practiced in many countries which focuses on gratitude for good harvests of the preceding year. -In the United States, we celebrate Thanksgiving on the fourth Thursday of November each year often by eating meals we create together with others. -This post channels the spirit of Thanksgiving by giving our thanks through code as a “Codesgiving”, acknowledging and creating better software together. -

- -

Giving Thanks to Open-source Harvests

- -

- -

Part of building software involves the use of code which others have built, maintained, and distributed for a wider audience. -Using other people’s work often comes in the form of open-source “harvesting” as we find solutions to software challenges we face. -Examples might include installing and depending upon Python packages from PyPI or R packages from CRAN within your software projects.

- -
-

“Real generosity toward the future lies in giving all to the present.” -- Albert Camus

-
- -

These open-source projects have internal costs which are sometimes invisible to those who consume them. -Every software project has an implied level of software gardening time costs involved to impede decay, practice continuous improvements, and evolve the work. -One way to actively share our thanks for the projects we depend on is through applying our time towards code contributions on them.

- -

Many projects are in need of additional people’s thinking and development time. -Have you ever noticed something that needs to be fixed or desirable functionality in a project you use? -Consider adding your contributions to open-source!

- -

All Contributions Matter

- -

- -

Contributing to open-source can come in many forms and contributions don’t need to be gigantic to make an impact. -Software often involves simplifying complexity. -Simplification requires many actions beyond solely writing code. -For example, a short walk outside, a conversation with someone, or a nap can sometimes help us with breakthroughs when it comes to development. -By the same token, open-source benefits greatly from communications on discussion boards, bug or feature descriptions, or other work that might not be strictly considered “engineering”.

- -

An Open-source Contribution Approach

- -

- -

The troubleshooting process as a workflow involving looped checks for verifying an issue and validating the solution fixes an issue.

- -

It can feel overwhelming to find a way to contribute to open-source. -Similar to other software methodologies, modularizing your approach can help you progress without being overwhelmed. -Using a troubleshooting approach like the above can help you break down big challenges into bite-sized chunks. -Consider each step as a “module” or “section” which needs to be addressed sequentially.

- -

Embrace a Learning Mindset

- -
-

“Before you speak ask yourself if what you are going to say is true, is kind, is necessary, is helpful. If the answer is no, maybe what you are about to say should be left unsaid.” -- Bernard Meltzer

-
- -

Open-source contributions almost always entail learning of some kind. -Many contributions happen solely in the form of code and text communications which are easily misinterpreted. -Assume positive intent and accept input from others while upholding your own ideas to share successful contributions together. -Prepare yourself by intentionally opening your mind to input from others, even if you’re sure you’re absolutely “right”.

- -
- - -
- -

Before communicating, be sure to use Bernard Meltzer’s self-checks mentioned above.

- -
    -
  1. Is what I’m about to say true? -
      -
    • Have I taken time to verify the claims in a way others can replicate or understand?
    • -
    -
  2. -
  3. Is what I’m about to say kind? -
      -
    • Does my intention and communication channel kindness (and not cruelty)?
    • -
    -
  4. -
  5. Is what I’m about to say necessary? -
      -
    • Do my words and actions here enable or enhance progress towards a goal (would the outcome be achieved without them)?
    • -
    -
  6. -
  7. Is what I’m about to say helpful? -
      -
    • How does my communication increase the quality or sustainability of the project (or group)?
    • -
    -
  8. -
- -
-
- -

Setting Software Scheduling Expectations

Suggested ratio of time spent by type of work for an open-source contribution.
- 1/3 planning (~33%)
- 1/6 coding (~17%)
- 1/4 component and system testing (25%)
- 1/4 code review, revisions, and post-actions (25%)

This modified rule of thumb from The Mythical Man-Month can assist with how you structure your time for an open-source contribution. Notice the emphasis on planning and testing, and keep these in mind as you progress (the actual programming time can be small if adequate time has been spent on planning). Notably, the original time fractions are modified here, with the final quarter of the time allocated to code review, revisions, and post-actions. Planning for the time expense of code review and related elements helps you keep a learning mindset throughout the process (instead of feeling like the review is a “tack-on” or “optional / supplementary”). A good motto to keep in mind throughout this process is Festina lente, or “make haste, slowly”: take care to move thoughtfully and as slowly as necessary to do things correctly the first time.

Planning an Open-source Contribution

Has the Need Already Been Reported?

Be sure to check whether the bug or feature has already been reported somewhere! In a way, this is a practice of “Don’t repeat yourself” (DRY), where we attempt to avoid repeating the same block of code (in this case, the “code” can be understood as natural language). For example, you can look on GitHub Issues or GitHub Discussions with a search query matching the rough idea of what you’re thinking about. You can also use the GitHub search bar to search multiple areas at once (including Issues, Discussions, Pull Requests, etc.) when you enter a query from the repository homepage. If the need has been reported already, take a look to see whether someone has already made a related code contribution.

An open discussion or report of the need doesn’t guarantee someone’s already working on a solution. If there aren’t yet any code contributions and it doesn’t look like anyone is working on one, consider volunteering to take a further look into a solution, and be sure to acknowledge any existing discussions. If you’re unsure, it’s always kind to mention your interest in the report and ask for more information.

Is the Need a Bug or Feature?

One way to solidify your thinking and approach is to consider whether what you’re proposing is a bug or a feature. A software bug is something which is broken or malfunctioning. A software feature is generally new functionality or a different way of doing things than what exists today. There’s often overlap between these, and sometimes they can inspire branching needs, but individually they are usually more of one than the other. If you can’t decide whether your need is a bug or a feature, consider breaking it down into smaller sub-components so each can be more of one or the other. Following this strategy will help you communicate the potential contribution and also clarify the development process (for example, a critical bug might be prioritized differently than a nice-to-have new feature).

Reporting the Need for Change

# Using `function_x` with `library_y` causes `exception_z`

## Summary

As a `library_y` research software developer I want to use `function_x`
for my data so that I can share data for research outcomes.

## Reproducing the error

This error may be seen using Python v3.x on all major OS's using
the following code snippet:
...

An example of a user story issue report with an imagined code example.

Open-source needs are often best reported through written stories captured within a bug or feature tracking system (such as GitHub Issues), ideally including example code or logs. One template for reporting issues is the “user story”. A user story typically comes in the form: As a < type of user >, I want < some goal > so that < some reason > (Mountain Goat Software: User Stories). Alongside the story, it can help to add a snippet of code which exemplifies the problem, the new functionality, or a potential adjacent / similar solution. As a general principle, be as specific as you can without going overboard. Include things like programming language version, operating system, and other system dependencies that might be related.

Once you have a good written description of the need, be sure to submit it where it can be seen by the relevant development community. For GitHub-based work, this is usually a GitHub Issue, but it can also entail discussion board posts to gather buy-in or consensus before proceeding. In addition to the specifics outlined above, also recall the learning mindset and Bernard Meltzer’s self-checks, taking time to acknowledge the potential challenges and already-attempted solutions associated with the description (conveying kindness throughout).

What Happens After You Submit a Bug or Feature Report?

When making open-source contributions, it can also help to mention that you’re interested in resolving the issue through a related pull request and review. Open-source projects often welcome new contributors but may have specific requirements. These requirements are usually spelled out in a CONTRIBUTING.md document found somewhere in the repository or in organization-level documentation. It’s also completely okay to let other contributors build solutions for the issue (like we mentioned before, all contributions matter, including the reporting of bugs or features themselves)!

Developing and Testing an Open-source Contribution

Creating a Development Workspace

Once you’re ready to develop a solution for the reported need, you’ll need a place to version your updates. This work generally takes place through version control on focused branches which are named in a way that relates to their focus. When working on GitHub, this work also commonly takes place on forked repository copies. Using these methods helps isolate your changes from other work within the project. It can also help you track your progress alongside related changes that might take place before you’re able to seek review or code merges.

Bug or Feature Verification with Test-driven Development

One can use a test-driven development approach as numbered steps (Wikipedia):

1. Add or modify a test which checks for a bug fix or feature addition
2. Run all tests (expecting the newly added test content to fail)
3. Write a simple version of the code which allows the tests to succeed
4. Verify that all tests now pass
5. Return to step 3, refactoring the code as needed
If you decide to develop a solution for what you reported, one software strategy which can help you remain focused and objective is test-driven development. Using this pattern sets a “cognitive milestone” for you as you develop a solution to what was reported. Open-source projects can have many interesting components which could take time and be challenging to understand. The added test and related development will help keep you goal-oriented without getting lost in the “software forest” of a project.
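As a small sketch of how this cycle might look with pytest-style tests (the function and behavior here are hypothetical, not from any real project):

```python
# Step 3: a simple version of the fix which allows the test to succeed.
# (hypothetical example: clamp() should never return values below zero)
def clamp_non_negative(value: float) -> float:
    """Return value, clamped so it is never below zero."""
    return max(0.0, value)


# Step 1: a test added for the reported bug; running `pytest` (steps 2 and 4)
# exercises it alongside the rest of the suite, and you refactor as needed (step 5)
def test_clamp_non_negative():
    # the reported bug: negative inputs previously passed through unchanged
    assert clamp_non_negative(-5.0) == 0.0
    # existing behavior should remain unchanged
    assert clamp_non_negative(2.5) == 2.5


test_clamp_non_negative()
```

Writing the test first makes the “cognitive milestone” concrete: the moment the test passes, the reported need is verifiably addressed.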

Prefer Simple Over Complex Changes

“… Simple is better than complex. Complex is better than complicated. …” - PEP 20: The Zen of Python

Further channeling step 3 from test-driven development above, prefer simple changes over more complex ones (recognizing that the absolute simplest can take iteration and thought). Some of the best solutions are often the most easily understood ones (where the code additions or changes seem obvious afterwards). A “simplest version” of the code can often be refactored and completed more quickly than devising a “perfect” solution the first time. Remember, you’ll very likely have the help of a code review before the code is merged (expect to learn more and add changes during review!).

It might be tempting to address more than one bug or feature at the same time. Avoid feature creep as you build solutions: stay focused on the task at hand! Take note of things you notice on your journey to address the reported needs. These can become additional reported bugs or features to address later. Staying focused with your development will save you time, keep your tests constrained, and (theoretically) help reduce the time and complexity of code review.

Developing a Solution

Once you have a test in place for the bug fix or feature addition, it’s time to work towards developing a solution. If you’ve taken time to accomplish the prior steps, you may already have a good idea about how to approach a solution. If not, spend some time investigating the technical aspects of a solution, optionally adding this information to the report or discussion content for further review before development. Use timeboxing techniques to help make sure the time you spend in development is no more than necessary.

Code Review, Revisions, and Post-actions

Pull Requests and Code Review

When your code and new test(s) are in a good spot, it’s time to ask for a code review. It might feel tempting to perfect the code first. Instead, consider whether the code is “good enough” and would benefit from someone else’s feedback. Code review takes advantage of a strength of our species: collaborative, multi-perspective thinking. Leverage this in your open-source experience by seeking feedback when things feel “good enough”.

Demonstrating the Pareto Principle’s “vital few”: a small number of changes achieves 80% of the value associated with the needs.

One way to understand “good enough” is to assess whether you have reached what the Pareto Principle terms the “vital few” causes. The Pareto Principle states that roughly 80% of consequences come from 20% of causes (the “vital few”). What are the 20% of changes (for example, as commits) required to achieve 80% of the desired intent of your open-source contribution? When you reach those 20% of changes, consider opening a pull request to gather more insight about whether those changes will suffice and how the remaining effort might be spent.

As you go through the process of opening a pull request, be sure to follow the project’s CONTRIBUTING.md documentation; each one can vary. When working on GitHub-based projects, you’ll need to open a pull request against the correct branch (usually the upstream main). If you used a GitHub issue to report the need, mention the issue in the pull request description using the #issue-number reference (for example, #123 where the issue link looks like https://github.com/orgname/reponame/issues/123) to help link the work to the reported need. This will cause the pull request to show up within the issue and automatically create a link to the issue from the pull request.

Code Revisions

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” - Antoine de Saint-Exupéry

You may be asked to update your code based on automated code quality checks or reviewer requests. Treat these with care; embrace learning and remember that this step can take 25% of the total time for the contribution. When working on GitHub forks or branches, you can make additional commits directly on the development branch used for the pull request. If your reviewers requested changes, re-request their review once the changes have been made to let them know the code is ready for another look.

Post-actions and Tidying Up Afterwards

Once the code has been accepted by the reviewers and has passed any automated testing suite(s), the content is ready to be merged. Oftentimes this work is completed by core maintainers of the project. After the code is merged, it’s usually a good idea to clean up your workspace by deleting your development branch and syncing with the upstream repository. While it’s up to core maintainers to decide on report closure, typically the reported need can be closed and might benefit from a comment describing the fix. Many of these steps are common courtesy but, importantly, they also set you up for your next contributions!

Concluding Thoughts

Hopefully the above helps you understand the open-source contribution process better. As stated earlier, every little part helps! Best wishes on your open-source journey and happy Codesgiving!

References

- Top Image: Französischer Obstgarten zur Erntezeit (Le verger) by Charles-François Daubigny (cropped). (Source: Wikimedia Commons)

dave-bunten

Tip of the Week: Data Quality Validation through Software Testing Techniques (2023-10-04)

TLDR (too long, didn’t read);

Implement data quality validation through software testing approaches which leverage ideas surrounding Hoare triples and Design by Contract (DbC). Balance reusability through component-based data testing with Great Expectations or Assertr. For greater specificity in your data testing, use database-schema-like verification through Pandera or a JSON Schema validator. When possible, practice shift-left testing on data sources through the concept of “database(s) as code” via tools like Data Version Control (DVC) and Flyway.

Introduction

Diagram showing input, in-process data, and output data as a workflow.

Data-oriented software development can benefit from a specialized focus on varying aspects of data quality validation. We can use software testing techniques to validate certain qualities of the data in order to meet a declarative standard (where one doesn’t need to guess or rediscover known issues). These techniques come in a number of forms and generally follow existing software testing concepts, which we’ll expand upon below. This article covers a few tools which leverage these techniques for addressing data quality validation testing.

Data Quality Testing Concepts

Hoare Triple

One concept we’ll use to present these ideas is Hoare logic, a system for reasoning about software correctness. Hoare logic includes the idea of a Hoare triple ($\{P\}\,C\,\{Q\}$), where $\{P\}$ is a precondition assertion, $C$ is a command, and $\{Q\}$ is a postcondition assertion. Software development using data often entails (sometimes assumed) precondition assertions about data sources, a transformation or command which changes the data, and a (sometimes assumed) postcondition assertion about a data output or result.
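As an illustrative sketch (not tied to any particular library), the triple can be mirrored with plain assertions around a data transformation; the function and data here are hypothetical:

```python
def double_values(values):
    """Command C: double each value in the input."""
    return [v * 2 for v in values]


data = [1, 2, 3]

# precondition {P}: the input is non-empty and numeric
assert len(data) > 0 and all(isinstance(v, (int, float)) for v in data)

# command C: transform the data
result = double_values(data)

# postcondition {Q}: same length, and each output is twice its input
assert len(result) == len(data)
assert all(r == v * 2 for r, v in zip(result, data))
```

The tools discussed below essentially formalize and enrich these three assertion points rather than leaving them implicit.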

Design by Contract

Data testing through design by contract over a Hoare triple.

Hoare logic and software correctness help describe design by contract (DbC), a software approach involving the formal specification of “contracts” which help ensure we meet our intended goals. DbC helps describe how to create assertions when proceeding through the Hoare triple states of data. These concepts provide a framework for thinking about the tools mentioned below.

Data Component Testing

Diagram showing data contracts as generalized and reusable “component” tests which are checked through contracts, raising an error if they aren’t met or continuing operations if they are.

We often need to verify certain components of our data to ensure it meets minimum standards. The word “component” is used here in the context of component-based software design to group together reusable, modular qualities of the data where sometimes we don’t know (or don’t want) to specify granular aspects (such as schema, type, or column name). These components are often implied by the software which will eventually use the data, which can emit warnings or errors when it finds the data does not meet those standards. Oftentimes these components are contracts checking postconditions of earlier commands or procedures, ensuring the data we receive is accurate to our intention. We can avoid downstream challenges by creating contracts for our data to verify the components of a result before it reaches later stages.

Examples of these data components might include:

- The dataset has no null values.
- The dataset has no more than 3 columns.
- The dataset has a column called numbers which includes numbers in the range of 0-10.
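Before reaching for a dedicated library, the example components above could be sketched as plain Python checks over a small table-like structure (the dataset here is a hypothetical list of records standing in for a dataframe, purely for illustration):

```python
# hypothetical dataset: a list of records standing in for a dataframe
dataset = [
    {"numbers": 1, "label": "a"},
    {"numbers": 7, "label": "b"},
]

# component: the dataset has no null values
assert all(value is not None for row in dataset for value in row.values())

# component: the dataset has no more than 3 columns
assert all(len(row) <= 3 for row in dataset)

# component: column "numbers" includes numbers in the range of 0-10
assert all(0 <= row["numbers"] <= 10 for row in dataset)
```

Libraries like those below package checks of this kind into reusable, shareable contracts with far better reporting than a bare AssertionError.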

Data Component Testing - Great Expectations

"""
Example of using Great Expectations
Referenced with modifications from:
https://docs.greatexpectations.io/docs/tutorials/quickstart/
"""
import great_expectations as gx

# get gx DataContext
# see: https://docs.greatexpectations.io/docs/terms/data_context
context = gx.get_context()

# set a context data source
# see: https://docs.greatexpectations.io/docs/terms/datasource
validator = context.sources.pandas_default.read_csv(
    "https://raw.githubusercontent.com/great-expectations/gx_tutorials/main/data/yellow_tripdata_sample_2019-01.csv"
)

# add and save expectations
# see: https://docs.greatexpectations.io/docs/terms/expectation
validator.expect_column_values_to_not_be_null("pickup_datetime")
validator.expect_column_values_to_be_between("passenger_count", auto=True)
validator.save_expectation_suite()

# checkpoint the context with the validator
# see: https://docs.greatexpectations.io/docs/terms/checkpoint
checkpoint = context.add_or_update_checkpoint(
    name="my_quickstart_checkpoint",
    validator=validator,
)

# gather checkpoint expectation results
checkpoint_result = checkpoint.run()

# show the checkpoint expectation results
context.view_validation_result(checkpoint_result)

Example code leveraging the Python package Great Expectations to perform data component contract validation.

- -

Great Expectations is a Python project which provides data contract testing features through the use of components called “expectations” about the data involved. These expectations act as a standardized way to define and validate components of the data in the same way across different datasets or projects. In addition to providing a mechanism for validating data contracts, Great Expectations also provides ways to view validation results, share expectations, and build data documentation. See the above example for a quick code reference of how these work.

- -

Data Component Testing - Assertr

# Example using the Assertr package
# referenced with modifications from:
# https://docs.ropensci.org/assertr/articles/assertr.html
library(dplyr)
library(assertr)

# set our.data to reference the mtcars dataset
our.data <- mtcars

# simulate an issue in the data for contract specification
our.data$mpg[5] <- our.data$mpg[5] * -1

# use verify to validate that column mpg >= 0
our.data %>%
  verify(mpg >= 0)

# use assert to validate that column mpg is within the bounds of 0 to infinity
our.data %>%
  assert(within_bounds(0, Inf), mpg)

Example code leveraging the R package Assertr to perform data component contract validation.

- -

Assertr is an R project which provides similar data component assertions in the form of verify, assert, and insist methods (see here for more documentation). Using Assertr enables similar but more lightweight functionality compared to Great Expectations. See the above for an example of how to use it in your projects.

- -

Data Schema Testing

Diagram showing data contracts as more granular specifications via “schema” tests which are checked through contracts, raising an error if they aren’t met or continuing operations if they are.

Sometimes we need greater specificity than what a data component can offer. We can use data schema testing contracts in these cases. The word “schema” here is used in the context of database schemas, but these specifications are often suitable well beyond databases (including database-like formats such as dataframes). While reuse and modularity are more limited in these cases, schema contracts can be helpful for efforts where precision is valued or necessary to accomplish your goals. It’s worth mentioning that data schema and component testing tools often overlap (meaning you can use them interchangeably to accomplish both kinds of tasks).

- -

Data Schema Testing - Pandera

"""
Example of using the Pandera package
referenced with modifications from:
https://pandera.readthedocs.io/en/stable/try_pandera.html
"""
import pandas as pd
import pandera as pa
from pandera.typing import DataFrame, Series


# define a schema
class Schema(pa.DataFrameModel):
    item: Series[str] = pa.Field(isin=["apple", "orange"], coerce=True)
    price: Series[float] = pa.Field(gt=0, coerce=True)


# simulate an invalid dataframe
invalid_data = pd.DataFrame.from_records(
    [{"item": "applee", "price": 0.5},
     {"item": "orange", "price": -1000}]
)


# set a decorator on a function which will
# check the schema as a precondition
@pa.check_types(lazy=True)
def precondition_transform_data(data: DataFrame[Schema]):
    print("here")
    return data


# precondition schema testing
try:
    precondition_transform_data(invalid_data)
except pa.errors.SchemaErrors as schema_excs:
    print(schema_excs)

# inline or implied postcondition schema testing
try:
    Schema.validate(invalid_data)
except pa.errors.SchemaError as schema_exc:
    print(schema_exc)

Example code leveraging the Python package Pandera to perform data schema contract validation.

- -

DataFrame-like libraries like Pandas can be verified using schema specification contracts through Pandera (see here for full DataFrame library support). Pandera helps define specific columns and column types, and also has some component-like features. It leverages a Pythonic class specification, similar to data classes and pydantic models, making it potentially easier to use if you already understand Python and DataFrame-like libraries. See the above example for a look at how Pandera may be used.

- -

Data Schema Testing - JSON Schema

# Example of using the jsonvalidate R package.
# Referenced with modifications from:
# https://docs.ropensci.org/jsonvalidate/articles/jsonvalidate.html

schema <- '{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Hello World JSON Schema",
  "description": "An example",
  "type": "object",
  "properties": {
    "hello": {
      "description": "Provide a description of the property here",
      "type": "string"
    }
  },
  "required": [
    "hello"
  ]
}'

# create a schema contract for data
validate <- jsonvalidate::json_validator(schema, engine = "ajv")

# validate JSON using the schema contract and invalid data (missing "hello")
validate("{}")

# validate JSON using the schema contract and valid data
# (note: JSON requires double-quoted strings)
validate('{"hello": "world"}')
- -

JSON Schema provides a vocabulary for validating schema contracts for JSON documents. There are several implementations of the vocabulary, including the Python package jsonschema and the R package jsonvalidate. Using these libraries allows you to define pre- or postcondition data schema contracts for your software work. See above for an R-based example of using this vocabulary to perform data schema testing.
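To make the contract idea concrete without any dependencies, here is a minimal, hand-rolled Python sketch of the kind of required-property and type check the schema above expresses (real validators such as the jsonschema package implement far more of the specification; this is only an illustration of the contract):

```python
import json


def validate_hello(document: str) -> bool:
    """Check a JSON document against a tiny contract:
    an object with a required string property "hello"."""
    try:
        data = json.loads(document)
    except json.JSONDecodeError:
        return False
    # mirrors: "type": "object", "required": ["hello"], "hello" is "type": "string"
    return isinstance(data, dict) and isinstance(data.get("hello"), str)


print(validate_hello("{}"))                  # False: missing required property
print(validate_hello('{"hello": "world"}'))  # True: satisfies the contract
```

In practice you would hand the full schema document to a library rather than hand-coding each rule, so the contract itself stays declarative and shareable.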

- -

Shift-left Data Testing

Earlier portions of this article have covered primarily the validation of command side-effects and postconditions. This is commonplace in development, where data sources are usually provided without the ability to validate their preconditions or definitions. Shift-left testing is a movement which focuses on validating earlier in the lifecycle when possible to avoid downstream issues.

- -

Shift-left Data Testing - Data Version Control (DVC)

Data sources undergoing frequent changes become difficult to use because we often don’t know when the data is from or what version it might be. This information is sometimes added in the form of filename suffixes or an update datetime column in a table. Data Version Control (DVC) is one tool which is specially purposed to address this challenge through source control techniques. Data managed by DVC allows software to be built in such a way that version preconditions are validated before reaching data transformations (commands) or postconditions.

- -

Shift-left Data Testing - Flyway

Database sources can leverage an idea nicknamed “database as code” (which builds on the similar idea of infrastructure as code) to declare the schema and other elements of a database in the same way one would write code. These ideas apply to databases and, more broadly through DVC mentioned above (among other tools), via the concept of “data as code”. Implementing this idea has several advantages, from source versioning to visibility and replicability. One tool which implements these ideas is Flyway, which can manage and apply SQL-based files as part of software data precondition validation. A lightweight alternative to Flyway is sometimes to include a SQL file which creates related database objects and doubles as data documentation.

dave-bunten

Tip of the Week: Python Packaging as Publishing (2023-09-05)
-
- - - -

Python packaging is the craft of preparing your Python work for distribution to wider audiences. Following packaging conventions helps your software become more understandable, trustworthy, and connected (to others and their work). Taking advantage of common packaging practices also strengthens our collective superpower: collaboration. This post covers the preparation aspects of packaging, readying software for wider distribution.

- - - -

TLDR (too long, didn’t read);

Use Pythonic packaging tools and techniques to help avoid code decay and unwanted code smells and to increase your development velocity. Increase understanding with unsurprising directory structures like those exhibited in pypa/sampleproject or scientific-python/cookie. Enhance trust by being authentic on source control systems like GitHub (by customizing your profile), staying up to date with the latest supported versions of Python, and using security linting tools like PyCQA/bandit through visible, automated GitHub Actions ✅ checks. Connect your projects to others using CITATION.cff files, CONTRIBUTING.md files, and environment + packaging tools like Poetry to help others reproduce the same results from your code.

- -

Why practice packaging?

How are a page with some text and a book different?

The practice of Python packaging is similar to that of publishing a book. Consider how a bag of text is different from a book. How and why are these things different?

- -
- A book has a commonly understood sequencing of content (i.e. copyright page, then title page, then body content pages…).
- A book often cites references and acknowledges other work explicitly.
- A book undergoes a manufacturing process which allows the text to be received in many places the same way.
Code undergoing packaging to achieve understanding, trust, and connection for an audience.

These can be thought of as metaphors when it comes to packaging in Python. Books have a smell which sometimes comes from how they were stored, treated, or maintained. While there are pleasant book smells, a book might also smell soggy from being left in the rain or stored without maintenance for too long. Just like books, software can have negative code smells indicating a lack of care or a less sustainable condition. Following good packaging practices helps avoid unwanted code smells while increasing development velocity, maintainability through understandability, trustworthiness of the content, and connection to other projects.

- -
- - -
- -

Note: these techniques can also work just as well for inner-source collaboration (private or proprietary development within organizations)! Don’t hesitate to use them on projects which may not be public facing in order to make development and maintenance easier (if only for you).

- -
-
- -
- - -
- -

“Wait, what are Python packages?”

my_package/
│   __init__.py
│   module_a.py
│   module_b.py

A Python package is a collection of modules (.py files) that usually includes an “initialization file” __init__.py. This post covers the craft of packaging, which can include one or many packages.

- -
-
- -

Understanding: common directory structures

project_directory
├── README.md
├── LICENSE.txt
├── pyproject.toml
├── docs
│   └── source
│       └── index.md
├── src
│   └── package_name
│       └── __init__.py
│       └── module_a.py
└── tests
    └── __init__.py
    └── test_module_a.py

Python packaging today generally assumes a specific directory design. Following this convention generally improves the understanding of your code. We’ll cover each of these below.

- -

Project root files

project_directory
├── README.md
├── LICENSE.txt
├── pyproject.toml
│ ...

- The README.md file is a markdown file with documentation including project goals and other short notes about installation, development, or usage. The README.md file is akin to a book jacket blurb which quickly tells the audience what the book will be about.
- The LICENSE.txt file is a text file which indicates the licensing details of the project. It often includes information about how the project may be used and protects the authors in disputes. The LICENSE.txt file can be thought of like a book’s copyright page. See https://choosealicense.com/ for more details on selecting an open source license.
- The pyproject.toml file is a Python-specific TOML file which helps organize how the project is used and built for wider distribution. The pyproject.toml file is similar to a book’s table of contents, index, and printing or production specification.
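As a rough sketch of what a minimal pyproject.toml might contain (the project name, version, and metadata here are placeholders, and the exact build-system table depends on the build backend you choose):

```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "package-name"
version = "0.1.0"
description = "A short description of the project."
readme = "README.md"
requires-python = ">=3.9"
license = { file = "LICENSE.txt" }
```

Build and packaging tools read this one file to learn how to install, build, and describe the project, much like a printer works from a book's production specification.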

Project sub-directories

- -
project_directory
│ ...
├── docs
│   └── source
│       └── index.md
├── src
│   └── package_name
│       └── __init__.py
│       └── module_a.py
└── tests
    └── __init__.py
    └── test_module_a.py
- -
  • The docs directory is used for in-depth documentation and related documentation build code (for example, when building documentation websites, aka “docsites”). The docs directory includes information similar to a book’s “study guide”, providing content surrounding how to best make use of and understand the content found within.
  • The src directory includes primary source code for use in the project. Python projects generally use a nested package directory with modules and sub-packages. The src directory is like a book’s body or general content (perhaps thinking of modules as chapters or sections of related ideas).
  • The tests directory includes testing code for validating functionality of code found in the src directory. The above follows pytest conventions. The tests directory is for code which acts like a book’s early reviewers or editors, making sure that if you change things in src the impacts remain as expected.
- -

Common directory structure examples

- -

The Python directory structure described above can be witnessed in the wild from the following resources. These can serve as a great resource for starting or adjusting your own work.

- - - -

Trust: building audience confidence

- -
How much does your audience trust your work?

Building an understandable body of content helps tremendously with audience trust. What else can we do to enhance project trust? The following elements can help improve an audience’s trust in packaged Python work.

- -

Source control authenticity

- -
Comparing the difference between a generic or anonymous user and one with greater authenticity.

Be authentic! Fill out your profile to help your audience know the author and why you do what you do. See here for GitHub’s documentation on filling out your profile. Doing this may seem irrelevant but can go a long way to making technical work more relatable.

- -
  • Add a profile picture of yourself or something fun.
  • Set your profile description to information which is both professionally accurate and unique to you.
  • Show or link to work which you feel may be relevant or exciting to those in your audience.
- -

Staying up to date with supported Python releases

- -
Major Python releases and their support status.

Use Python versions which are supported (this changes over time). Python versions which are end-of-life may be difficult to support and are a sign of code decay for projects. Specify the version of Python which is compatible with your project by using environment specifications such as pyproject.toml files and related packaging tools (more on this below).
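As a small sketch of this idea, code can defensively check the running interpreter against a declared minimum version. The `(3, 9)` floor below is an assumption matching the `^3.9` specifier used later in this post:

```python
# Illustrative check: report whether an interpreter version meets the
# minimum your project declares (adjust MINIMUM_SUPPORTED over time).
import sys

MINIMUM_SUPPORTED = (3, 9)  # assumption mirroring python = "^3.9"

def python_is_supported(version=None):
    """Return True when the (major, minor) version meets the project minimum."""
    version = version or sys.version_info[:2]
    return tuple(version) >= MINIMUM_SUPPORTED

print(python_is_supported((3, 12)))  # True
print(python_is_supported((2, 7)))   # False
```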

- - - -

Security linting and visible checks with GitHub Actions

- -
Make an effort to inspect your package for known security issues.

Use security vulnerability linters to help prevent undesirable or risky processing for your audience. Doing this is both practical (helping avoid issues) and conveys that you care about those using your package!
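To make the idea concrete, here is a toy sketch of the kind of pattern a security linter detects — real tools such as Bandit flag calls to `eval` (its check B307) among many other risks. This simplified detector is for illustration only, not a replacement for those tools:

```python
# Toy "security lint": find line numbers with calls to the built-in eval(),
# which real linters flag because eval can execute arbitrary code.
import ast

def find_eval_calls(source):
    """Return line numbers containing bare eval() calls in Python source."""
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

print(find_eval_calls("x = 1\ny = eval('2 + 2')\n"))  # [2]
```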

- - - -
The green checkmark from successful GitHub Actions runs can offer a sense of reassurance to your audience.

Combining GitHub Actions with security linters and tests from your software validation suite can add an observable ✅ for your project. This provides the audience with a sense that you’re transparently testing and sharing results of those tests.

- - - -

Connection: personal and inter-package relationships

- -
How does your package connect with other work and people?

Understandability and trust set the stage for your project’s connection to other people and projects. What can we do to facilitate connection with our project? Use the following techniques to help enhance your project’s connection to others and their work.

- -

Acknowledging authors and referenced work with CITATION.cff

- -

Add a CITATION.cff file to your project root in order to describe project relationships and acknowledgements in a standardized way. The CFF format is also GitHub compatible, making it easier to cite your project.
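A minimal CITATION.cff might look like the following sketch (all field values here are placeholders to replace with your project’s details):

```yaml
# CITATION.cff — illustrative minimal example
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "package-name"
authors:
  - family-names: "Lastname"
    given-names: "Firstname"
version: 0.1.0
date-released: "2023-01-01"
```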

- - - -

Reaching collaborators using CONTRIBUTING.md

- -
CONTRIBUTING.md documents can help you collaborate with others.

Add a CONTRIBUTING.md file to your project root to make clear the support details, development guidance, code of conduct, and overall documentation surrounding how the project is governed.

- - - -

Environment management reproducibility as connected project reality

- -
Environment and packaging managers can help you connect with your audience.

Code without an environment specification is difficult to run in a consistent way. This can lead to “works on my machine” scenarios where different things happen for different people, reducing the chance that people can connect with a shared reality for how your code should be used.

- -
-

“But why do we have to switch the way we do things?” We’ve always been switching approaches (software approaches evolve over time)! A brief history of Python environment and packaging tooling:

- -
  1. distutils, easy_install + setup.py
     (primarily used during 1990’s - early 2000’s)
  2. pip, setup.py + requirements.txt
     (primarily used during late 2000’s - early 2010’s)
  3. poetry + pyproject.toml
     (began use around late 2010’s - ongoing)
- -

Using Python poetry for environment and packaging management

- -

Poetry is one Pythonic environment and packaging manager which can help increase reproducibility using pyproject.toml files. It’s one of many alternatives; others include hatch and pipenv.

- -
poetry directory structure template use
- -
user@machine % poetry new --name=package_name --src .
Created package package_name in .

user@machine % tree .
.
├── README.md
├── pyproject.toml
├── src
│   └── package_name
│       └── __init__.py
└── tests
    └── __init__.py
- -

After installation, Poetry gives us the ability to initialize a directory structure similar to what we presented earlier by using the poetry new ... command. If you’d like a more interactive version of the same, use the poetry init command to fill out various sections of your project with detailed information.

- -
poetry format for project pyproject.toml
- -
# pyproject.toml
[tool.poetry]
name = "package-name"
version = "0.1.0"
description = ""
authors = ["username <email@address>"]
readme = "README.md"
packages = [{include = "package_name", from = "src"}]

[tool.poetry.dependencies]
python = "^3.9"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
- -

Using the poetry new ... command also initializes the content of our pyproject.toml file with opinionated details (following the recommendation from earlier in the article regarding declared Python version specification).

- -
poetry dependency management
- -
user@machine % poetry add pandas

Creating virtualenv package-name-1STl06GY-py3.9 in /pypoetry/virtualenvs
Using version ^2.1.0 for pandas

...

Writing lock file
- -

We can add dependencies directly using the poetry add ... command. This command also provides the possibility of using a group flag (for example poetry add pytest --group testing) to help organize and distinguish multiple sets of dependencies.

- -
  • A local virtual environment is managed for us automatically.
  • A poetry.lock file is written when the dependencies are installed to help ensure the version you installed today will be what’s used on other machines.
  • The poetry.lock file helps ensure reproducibility when dealing with dependency version ranges (where otherwise we may end up using different versions which match the dependency ranges but observe different results).
- -
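For reference, a dependency added with a group flag lands in pyproject.toml under a group-specific section similar to the following sketch (the group name and version specifier here are illustrative):

```toml
# pyproject.toml (excerpt) — e.g. after `poetry add pytest --group testing`
[tool.poetry.group.testing.dependencies]
pytest = "^7.4"
```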
Running Python from the context of poetry environments
- -
% poetry run python -c "import pandas; print(pandas.__version__)"

2.1.0
- -

We can invoke the virtual environment directly using the poetry run ... command.

- -
  • This allows us to quickly run code through the context of the project’s environment.
  • Poetry can automatically switch between multiple environments based on the local directory structure.
  • We can also use the environment as a “shell” (similar to virtualenv’s activate) with the poetry shell command, which enables us to leverage a dynamic session in the context of the poetry environment.
- -
Building source code with poetry
- -
% pip install git+https://github.com/project/package_name
- -

Even if we don’t reach wider distribution on PyPI or elsewhere, source code managed by pyproject.toml and poetry can be used for “manual” distribution (with reproducible results) from GitHub repositories. When we’re ready to distribute pre-built packages on other networks we can also use the following:

- -
% poetry build

Building package-name (0.1.0)
  - Building sdist
  - Built package_name-0.1.0.tar.gz
  - Building wheel
  - Built package_name-0.1.0-py3-none-any.whl
- -

Poetry readies source code and pre-built versions of our code for distribution platforms like PyPI by using the poetry build command. We’ll cover more on these files and distribution steps in a later post!

dave-bunten

Tip of the Week: Using Python and Anaconda with the Alpine HPC Cluster (2023-07-07)


- -
-
- - - -

This post is intended to help demonstrate the use of Python on Alpine, a High Performance Compute (HPC) cluster hosted by the University of Colorado Boulder’s Research Computing. We use Python here by way of Anaconda environment management to run code on Alpine. This post will cover a background on the technologies and how to use the contents of an example project repository as though it were a project you were working on and wanting to run on Alpine.

- - - -

- -

Diagram showing a repository’s work as being processed on Alpine.

- -

Table of Contents

- -
  1. Background: here we cover the background of Alpine and related technologies.
  2. Implementation: in this section we use the contents of an example project repository on Alpine.
- -

Background

- -

Why would I use Alpine?

- -

- -

Diagram showing common benefits of Alpine and HPC clusters.

- -

Alpine is a High Performance Compute (HPC) cluster. HPC environments provide shared computer hardware resources like memory, CPU, GPU or others to run performance-intensive work. Reasons for using Alpine might include:

- -
  • Compute resources: Leveraging otherwise cost-prohibitive amounts of memory, CPU, GPU, etc. for processing data.
  • Long-running jobs: Completing long-running processes which may take hours or days to complete.
  • Collaborations: Sharing a single implementation environment for reproducibility within a group (avoiding “works on my machine” inconsistency issues).
- -

How does Alpine work?

- -

- -

Diagram showing high-level user workflow and Alpine components.

- -

Alpine’s compute resources are used through compute nodes in a system called Slurm. Slurm is a system that allows a large number of users to run jobs on a cluster of computers; the system figures out how to use all the computers in the cluster to execute all the users’ jobs fairly (i.e., giving each user approximately equal time and resources on the cluster). A job is a request to run something, e.g. a bash script or a program, along with specifications about how much RAM and CPU it needs, how long it can run, and how it should be executed.

- -

Slurm’s role in general is to take in a job (submitted via the sbatch command) and put it into a queue (also called a “partition” in Slurm). For each job in the queue, Slurm constantly tries to find a computer in the cluster with enough resources to run that job, then when an available computer is found runs the program the job specifies on that computer. As the program runs, Slurm records its output to files and finally reports the program’s exit status (either completed or failed) back to the job manager.
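A minimal batch script submitted via sbatch might look like the following sketch — the partition name, resource requests, and commands here are assumptions to adapt against Alpine’s current documentation:

```bash
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --partition=amilan        # assumption: Alpine's general compute partition
#SBATCH --time=00:10:00           # ten-minute wall-clock limit
#SBATCH --ntasks=1                # one CPU task
#SBATCH --output=example-job.%j.out  # %j expands to the job ID

# the work Slurm runs on the assigned compute node
echo "Hello from $(hostname)"
```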

- -

Importantly, jobs can either be marked as interactive or batch. When you submit an interactive job, sbatch will pause while waiting for the job to start and then connect you to the program, so you can see its output and enter commands in real time. On the other hand, a batch job will return immediately; you can see the progress of your job using squeue, and you can typically see the output of the job in the folder from which you ran sbatch unless you specify otherwise. Data for or from Slurm work may be stored temporarily on local storage or on user-specific external (remote) storage.

- -
- - -
- -

Wait, what are “nodes”?

- -

A simplified way to understand the architecture of Slurm on Alpine is through login and compute “nodes” (computers). Login nodes act as a place to prepare and submit jobs which will be completed on compute nodes. Login nodes are never used to execute Slurm jobs, whereas compute nodes are exclusively accessed via a job. Login nodes have limited resource access and are not recommended for running procedures.

- -
-
- -

One can interact with Slurm on Alpine by use of Slurm interfaces and directives. A quick way of accessing Alpine resources is through the use of the acompile command, which starts an interactive job on a compute node with some typical default parameters for the job. Since acompile requests very modest resources (1 hour and 1 CPU core at the time of writing), you’ll typically quickly be connected to a compute node. For more intensive or long-lived interactive jobs, consider using sinteractive, which allows for more customization: Interactive Jobs. One can also access Slurm directly through various commands on Alpine.

- -

Many common software packages are available through the Modules package on Alpine (UCB RC documentation: The Modules System).

- -

How does Slurm work?

- -

- -

Diagram showing how Slurm generally works.

- -

Using Alpine effectively involves knowing how to leverage Slurm. A simplified way to understand how Slurm works is through the following sequence. Please note that some steps and additional complexity are omitted for the purposes of providing a basis of understanding.

- -
  1. Create a job script: build a script which will configure and run procedures related to the work you seek to accomplish on the HPC cluster.
  2. Submit job to Slurm: ask Slurm to run a set of commands or procedures.
  3. Job queue: Slurm will queue the submitted job alongside others (recall that the HPC cluster is a shared resource), providing information about progress as time goes on.
  4. Job processing: Slurm will run the procedures in the job script as scheduled.
  5. Job completion or cancellation: submitted jobs eventually may reach completion or cancellation states with saved information inside Slurm regarding what happened.
- -

How do I store data on Alpine?

- -

- -

Data used or produced by your processed jobs on Alpine may use a number of different data storage locations. Be sure to follow the Acceptable data storage and use policies of Alpine, avoiding the use of certain sensitive information and other items. These may be distinguished in two ways:

- -
    -
  1. -

    Alpine local storage (sometimes temporary): Alpine provides a number of temporary data storage locations for accomplishing your work. ⚠️ Note: some of these locations may be periodically purged and are not a suitable location for long-term data hosting (see here for more information)!
    Storage locations available (see this link for full descriptions):

    - -
    • Home filesystem: 2 GB of backed up space under /home/$USER (where $USER is your RMACC or Alpine username).
    • Projects filesystem: 250 GB of backed up space under /projects/$USER (where $USER is your RMACC or Alpine username).
    • Scratch filesystem: 10 TB (10,240 GB) of space which is not backed up under /scratch/alpine/$USER (where $USER is your RMACC or Alpine username).
  2. -
  3. -

    External / remote storage: Users are encouraged to explore external data storage options for long-term hosting.
    -Examples may include the following:

    - - -
  4. -
- -

How do I send or receive data on Alpine?

- -

- -

Diagram showing external data storage being used to send or receive data on Alpine local storage.

- -

Data may be sent to or gathered from Alpine using a number of different methods. These may vary contingent on the external data storage being referenced, the code involved, or your group’s available resources. Please reference the following documentation from the University of Colorado Boulder’s Research Computing regarding data transfers: The Compute Environment - Data Transfer. Please note: due to the authentication configuration of Alpine many local or SSH-key based methods are not available for CU Anschutz users. As a result, Globus represents one of the best options available (see 3. 📂 Transfer data results below). While the Globus tutorial in this document describes how you can download data from Alpine to your computer, note that you can also use Globus to transfer data to Alpine from your computer.

- -

Implementation

- -

- -

Diagram showing how an example project repository may be used within Alpine through primary steps and processing workflow.

- -

Use the following steps to understand how Alpine may be used with an example project repository to run example Python code.

- -

0. 🔑 Gain Alpine access

- -

First you will need to gain access to Alpine. This access is provided to members of the University of Colorado Anschutz through RMACC and is separate from other credentials which may be provided by default in your role. Please see the following guide from the University of Colorado Boulder’s Research Computing covering requesting access and generally how this works for members of the University of Colorado Anschutz.

- - - -

1. 🛠️ Prepare code on Alpine

- -
[username@xsede.org@login-ciX ~]$ cd /projects/$USER
[username@xsede.org@login-ciX username@xsede.org]$ git clone https://github.com/CU-DBMI/example-hpc-alpine-python
Cloning into 'example-hpc-alpine-python'...
... git output ...
[username@xsede.org@login-ciX username@xsede.org]$ ls -l example-hpc-alpine-python
... ls output ...
- -

An example of what this preparation section might look like in your Alpine terminal session.

- -

Next we will prepare our code within Alpine. We do this to balance the fact that we may develop and source control code outside of Alpine. In the case of this example work, we assume git as an interface for GitHub as the source control host.

- -

Below you’ll find the general steps associated with this process.

- -
  1. Login to the Alpine command line (reference this guide).
  2. Change directory into the Projects filesystem (generally we’ll assume processed data produced by this code are large enough to warrant the need for additional space):
     cd /projects/$USER
  3. Use git (built into Alpine by default) commands to clone this repo:
     git clone https://github.com/CU-DBMI/example-hpc-alpine-python
  4. Verify the contents were received as desired (this should show the contents of an example project repository):
     ls -l example-hpc-alpine-python
- - - -

- -
- - -
- -

What if I need to authenticate with GitHub?

- -

There are times where you may need to authenticate with GitHub in order to accomplish your work. From a GitHub perspective, you will want to use either GitHub Personal Access Tokens (PAT) (recommended by GitHub) or SSH keys associated with the git client on Alpine. Note: if you are prompted for a username and password from git when accessing a GitHub resource, the password is now associated with other keys like PAT’s instead of your user’s password (reference). See the following guide from GitHub for more information on how authentication through git to GitHub works:

- - - -
-
- -

2. ⚙️ Implement code on Alpine

- -
[username@xsede.org@login-ciX ~]$ sbatch --export=CSV_FILEPATH="/projects/$USER/example_data.csv" example-hpc-alpine-python/run_script.sh
[username@xsede.org@login-ciX username@xsede.org]$ tail -f example-hpc-alpine-python.out
... tail output (ctrl/cmd + c to cancel) ...
[username@xsede.org@login-ciX username@xsede.org]$ head -n 2 example_data.csv
... data output ...
- -

An example of what this implementation section might look like in your Alpine terminal session.

- -

After our code is available on Alpine we’re ready to run it using Slurm and related resources. We use Anaconda to build a Python environment with specified packages for reproducibility. The main goal of the Python code related to this work is to create a CSV file with random data at a specified location. We’ll use Slurm’s sbatch command, which submits batch scripts to Slurm using various options.
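As a rough stand-in for what such code might do (this is not the repository’s actual implementation — the column names and row count below are made up for illustration), generating a random-data CSV needs only the Python standard library:

```python
# Write a small CSV of random data to a given filepath (stdlib only).
import csv
import random
import tempfile
from pathlib import Path

def write_random_csv(filepath, rows=5, columns=("col_a", "col_b", "col_c")):
    """Create a CSV with the given column names and uniform random values."""
    with open(filepath, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(columns)          # header row
        for _ in range(rows):
            writer.writerow([random.random() for _ in columns])
    return filepath

path = write_random_csv(Path(tempfile.mkdtemp()) / "example_data.csv")
print(open(path).readline().strip())  # col_a,col_b,col_c
```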

- -
  1. Use the sbatch command with exported variable CSV_FILEPATH:
     sbatch --export=CSV_FILEPATH="/projects/$USER/example_data.csv" example-hpc-alpine-python/run_script.sh
  2. After a short moment, use the tail command to observe the log file created by Slurm for this sbatch submission. This file can help you understand where things are at and if anything went wrong:
     tail -f example-hpc-alpine-python.out
  3. Once you see that the work has completed from the log file, take a look at the top 2 lines of the data file using the head command to verify the data arrived as expected (column names with random values):
     head -n 2 example_data.csv
- -

3. 📂 Transfer data results

- -

- -

Diagram showing how example_data.csv may be transferred from Alpine to a local machine using Globus solutions.

- -

Now that the example data output from the Slurm work is available we need to transfer that data to a local system for further use. In this example we’ll use Globus as a data transfer method from Alpine to our local machine. Please note: always be sure to check the data privacy and policies which may change the methods or storage locations you may use for your data!

- -
    -
  1. Globus local machine configuration -
      -
    1. Install Globus Connect Personal on your local machine.
    2. -
    3. During installation, you will be prompted to login to Globus. Use your ACCESS credentials to login.
    4. -
    5. During installation login, note the label you provide to Globus. This will be used later, referenced as “Globus Connect Personal label”.
    6. -
    7. Ensure you add and (importantly:) provide write access to a local directory via Globus Connect Personal - Preferences - Access where you’d like the data to be received from Alpine to your local machine.

    8. -
    -
  2. -
  3. Globus web interface -
      -
    1. Use your ACCESS credentials to login to the Globus web interface.
    2. -
    3. Configure File Manager left side (source selection) -
        -
      1. Within the Globus web interface on the File Manager tab, use the Collection input box to search or select “CU Boulder Research Computing ACCESS”.
      2. -
      3. Within the Globus web interface on the File Manager tab, use the Path input box to enter: /projects/your_username_here/ (replacing “your_username_here” with your username from Alpine, including the “@” symbol if it applies).
      4. -
      -
    4. -
    5. Configure File Manager right side (destination selection) -
        -
      1. Within the Globus web interface on the File Manager tab, use the Collection input box to search or select the Globus Connect Personal label you provided in earlier steps.
      2. -
      3. Within the Globus web interface on the File Manager tab, use the Path input box to enter the local path which you made accessible in earlier steps.
      4. -
      -
    6. -
    7. Begin Globus transfer -
        -
      1. Within the Globus web interface on the File Manager tab on the left side (source selection), check the box next to the file example_data.csv.
      2. -
      3. Within the Globus web interface on the File Manager tab on the left side (source selection), click the “Start ▶️” button to begin the transfer from Alpine to your local directory.
      4. -
      5. After clicking the “Start ▶️” button, you may see a message in the top right with the message “Transfer request submitted successfully”. You can click the link to view the details associated with the transfer.
      6. -
      7. After a short period, the file will be transferred and you should be able to verify the contents on your local machine.
      8. -
      -
    8. -
    -
  4. -
- -

Further References

dave-bunten

Tip of the Week: Automate Software Workflows with GitHub Actions (2023-03-15)


- -
-
- - - -

There are many routine tasks which can be automated to help save time and increase reproducibility in software development. GitHub Actions provides one way to accomplish these tasks using code-based workflows and related workflow implementations. This type of automation is commonly used to perform tests, builds (preparing for the delivery of the code), or delivery itself (sending the code or related artifacts where they will be used).

- - - -

TLDR (too long, didn’t read): Use GitHub Actions to perform continuous integration work automatically by leveraging GitHub’s workflow specification and the existing marketplace of already-created Actions. You can test these workflows with Act, which can enhance development with this feature of GitHub. Consider making use of “write once, run anywhere” (WORA) and Dagger in conjunction with GitHub Actions to enable reproducible workflows for your software projects.

- -

Workflows in Software

- -
flowchart LR
  start((start)) --> action
  action["action(s)"] --> en((end))
  style start fill:#6EE7B7
  style en fill:#FCA5A5
- - -

An example workflow.

- -

Workflows consist of sequenced activities used by various systems. Software development workflows help accomplish work the same way each time by using what are commonly called “workflow engines”. Generally, workflow engines are provided code which indicate beginnings (what triggers a workflow to begin), actions (work being performed in sequence), and an ending (where the workflow stops). There are many workflow engines, including some which help accomplish work alongside version control.

- -

GitHub Actions

- -
flowchart LR
  subgraph workflow [GitHub Actions Workflow Run]
    direction LR
    action["action(s)"] --> en((end))
    start((event\ntrigger))
  end
  start --> action
  style start fill:#6EE7B7
  style en fill:#FCA5A5
- -

A diagram showing GitHub Actions as a workflow.

- -

GitHub Actions is a feature of GitHub which allows you to run workflows in relation to your code as a continuous integration (including automated testing, builds, and deployments) and general automation tool. For example, one can use GitHub Actions to make sure code related to a GitHub Pull Request passes certain tests before it is allowed to be merged. GitHub Actions may be specified using YAML files within your repository’s .github/workflows directory by using syntax specific to GitHub’s workflow specification. Each YAML file under the .github/workflows directory can specify workflows to accomplish tasks related to your software work. GitHub Actions workflows may be customized to your own needs, or use an existing marketplace of already-created Actions.
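For example, a minimal workflow file might look like the following sketch (the filename, trigger, and steps are illustrative assumptions to adapt for your own project):

```yaml
# .github/workflows/run-tests.yml
name: run tests
on:
  pull_request: # run whenever a pull request is opened or updated
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - run: pip install pytest && pytest
```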

- -
Image showing GitHub Actions tab on GitHub website.

GitHub provides an “Actions” tab for each repository which helps visualize and control GitHub Actions workflow runs. This tab shows a history of all workflow runs in the repository. For each run, it shows whether it ran successfully or not, the associated logs, and controls to cancel or re-run it.

- -
-

GitHub Actions Examples
GitHub Actions is sometimes better understood with examples. See the following references for a few basic examples of using GitHub Actions in a simulated project repository.

- - -
- -

Testing with Act

- -
flowchart LR
  subgraph container ["local simulation container(s)"]
    direction LR
    subgraph workflow [GitHub Actions Workflow Run]
      direction LR
      start((event\ntrigger))
      action --> en((end))
    end
  end
  start --> action
  act[Run Act] -.-> |Simulate\ntrigger| start
  style start fill:#6EE7B7
  style en fill:#FCA5A5
- -

A diagram showing how GitHub Actions workflows may be triggered from Act

- -

One challenge with GitHub Actions is a lack of standardized local testing tools. For example, how will you know that a new GitHub Actions workflow will function as expected (or at all) without pushing to the GitHub repository? One third-party tool which can help with this is Act. Act uses Docker images (which require Docker Desktop) to simulate running a GitHub Actions workflow within your local environment. Using Act can sometimes avoid guessing what will occur when a GitHub Actions workflow is added to your repository. See Act’s installation documentation for more information on getting started with this tool.

- -

Nested Workflows with GitHub Actions

- -
flowchart LR

  subgraph action ["Nested Workflow (Dagger, etc)"]
    direction LR
    actions
    start2((start)) --> actions
    actions --> en2((end))
    en2((end))
  end
  subgraph workflow2 [Local Environment Run]
    direction LR
    run2[run workflow]
    en3((end))
    start3((event\ntrigger))
  end
  subgraph workflow [GitHub Actions Workflow Run]
    direction LR
    start((event\ntrigger))
    run[run workflow]
    en((end))
  end

  start --> run
  start3 --> run2
  action -.-> run
  run --> en
  run2 --> en3
  action -.-> run2
  style start fill:#6EE7B7
  style start2 fill:#D1FAE5
  style start3 fill:#6EE7B7
  style en fill:#FCA5A5
  style en2 fill:#FFE4E6
  style en3 fill:#FCA5A5
- -

A diagram showing how GitHub Actions may leverage nested workflows with tools like Dagger.

- -

There are times when GitHub Actions may be too constricting or Act may not accurately simulate workflows. We also might seek to “write once, run anywhere” (WORA) to enable flexible development on many environments. One workaround to this challenge is to use nested workflows which are compatible with both local environments and GitHub Actions environments. Dagger is one tool which enables programmatically specifying and using workflows this way. Using Dagger allows you to trigger workflows on your local machine or GitHub Actions with the same underlying engine, meaning there is less inconsistency and guesswork for developers (see here for an explanation of how Dagger works).

- -

There are also other alternatives to Dagger you may want to consider based on your use case, preference, or interest. Earthly is similar to Dagger and uses “earthfiles” as a specification. Both Dagger and Earthly (in addition to GitHub Actions) use container-based approaches, which in and of themselves present additional alternatives outside the scope of this article.

- -
-

GitHub Actions with Nested Workflow Example -Reference this example for a brief demonstration of how GitHub Actions and Dagger may be used together.

- - -
- -

Closing Remarks

- -

Using GitHub Actions through the above methods can help automate your technical work and increase the quality of your code with sometimes very little additional effort. Saving time through this form of automation can provide additional flexibility to accomplish more complex work which requires your attention (perhaps using timeboxing techniques). Even small amounts of time saved can turn into large opportunities for other work. On this note, be sure to explore how GitHub Actions can improve your software endeavors.

]]>
dave-bunten
Tip of the Week: Branch, Review, and Learn2023-02-13T00:00:00+00:002024-01-25T20:55:52+00:00/set-website/preview/pr-29/2023/02/13/Branch-Review-and-LearnTip of the Week: Branch, Review, and Learn - -
- - -
- -


- -
-
- - - -

Git provides a feature called branching which facilitates parallel and segmented programming work through commits with version control. Using branching enables both work concurrency (multiple people working on the same repository at the same time) as well as a chance to isolate and review specific programming tasks. This article covers some conceptual best practices with branching, reviewing, and merging code using Github.

- - - -

Please note: the content below represents one opinion in a larger space of Git workflow concepts (it’s not perfect!). Developer cultures may vary on these topics; be sure to acknowledge people and culture over exclusive or absolute dedication to what is found below.

- -

TLDR (too long, didn’t read); -Use git branching techniques to segment the completion of programming tasks, gradually and consistently committing small changes (practicing festina lente or “make haste, slowly”). When a group of small changes is ready on a branch, request pull request reviews and take advantage of comments to continuously improve the work. Prepare for a branch merge after review by deciding which merge strategy is appropriate and automating merge requirements with branch protection rules.

- -

Concept: Coursework Branching

- -
-flowchart LR
- subgraph Course
-    direction LR
-    open["open\nassignment"]
-    turn_in["review\nassignment"]
-  end
-  subgraph Student ["     Student"]
-    direction LR
-    work["completed\nassignment"]
-  end
-  open -.-> turn_in
-  open --> |works towards| work
-  work --> |seeks review| turn_in
-
- - -

An example course and student assignment workflow.

- -

Git branching practices may be understood in context with similar workflows from real life. Consider a student taking a course, where an assignment is given to them to complete. In addition to the steps shown in the diagram above, it’s important to think about why this pattern is beneficial:

- -
    -
  • Completing an assignment allows us as social, inter-dependent beings to present new findings which enable learning and amalgamation of additional ideas from others.
  • -
  • The timebound nature of assignments enables us to practice some form of timeboxing so as to minimize tasks which may take too much time.
  • -
  • Segmenting applied learning into distinct, goal-oriented chunks helps make larger topics easier to understand.
  • -
- -

Branching to Complete an “Assignment”

- -
-%%{init: { 'logLevel': 'debug', 'theme': 'default' , 'themeVariables': {
-      'git0': '#4F46E5',
-      'git1': '#10B981',
-      'gitBranchLabel1': '#ffffff'
-} } }%%
-    gitGraph
-       commit id: "..."
-       commit id: "opened"
-       branch assignment
-       checkout assignment
-       commit id: "completed"
-       checkout main
-
- -

An example git diagram showing assignment branch based off main.

- -

Following the course assignment workflow, the diagram above shows an in-progress assignment branch based off of the main branch. When the assignment branch is created, we bring into it everything we know from main (the course) so far in the form of commits, or groups of changes to various files. Branching allows us to make consistent and well described changes based on what’s already happened without impacting others’ work in the meantime.
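The branching steps above can be sketched with git commands (run inside a throwaway repository; branch and commit names are illustrative):

```shell
set -e
# demonstration inside a throwaway repository
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "existing work on main"

# create a branch dedicated to one focused purpose
git checkout -q -b fix-links-in-docs

# festina lente: a small, well-described commit
echo "corrected link" > docs.md
git add docs.md
git -c user.email=demo@example.com -c user.name=demo \
  commit -q -m "Fix broken HTTP links in documentation"

git log --oneline
```

In a shared project the branch would then be pushed (e.g. `git push -u origin fix-links-in-docs`) to open a pull request.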

- -
-

Branching best practices:

- -
    -
  • Keep branch names and their work dedicated to a specific and focused purpose. For example: a branch named fix-links-in-docs might entail work related to fixing HTTP links within documentation.
  • -
  • Consider the use of Github Forks (along with branches within the fork) to help further isolate and enrich work potential. Forks also allow remixing existing work into new possibilities.
  • -
  • festina lente or “make haste, slowly”: Commits on any branch represent small chunks of a cohesive idea which will eventually be brought to main. It is often beneficial to be consistent with small, gradual commits to avoid a rushed or incomplete submission. The same applies more generally for software; taking time upfront to do things well can mean time saved later.
  • -
-
- -

Reviewing the Branched Work

- -
-%%{init: { 'logLevel': 'debug', 'theme': 'default' , 'themeVariables': {
-      'git0': '#6366F1',
-      'git1': '#10B981',
-      'gitBranchLabel1': '#ffffff'
-} } }%%
-    gitGraph
-       commit id: "..."
-       commit id: "opened"
-       branch assignment
-       checkout assignment
-       commit id: "completed"
-       checkout main
-       merge assignment id: "reviewed"
-
- -

An example git diagram showing assignment branch being merged with main after a review.

- -

The diagram above depicts a merge from the assignment branch to pull the changes into the main branch, simulating an assignment being returned for review within a course. While merges may be forced without review, it’s a best practice to create a Pull Request (PR) Review (also known as a Merge Request (MR) on some systems) and then ask other members of your team to review it. Doing this provides a chance to make revisions before code changes are “finalized” within the main branch.

- -
-

Github provides special tools for reviews which can assist both the author and reviewer:

- -
    -
  • Keep code changes intended for review small, enabling reviewers to reason through the work and provide feedback more quickly, practicing incremental continuous improvement (it may be difficult to address everything at once!). Smaller changes also keep the git history for a repository clearer.
  • -
  • Github comments: Overall review comments (encompassing all work from the branch) and Inline comments (inquiring about individual lines of code) may be provided. Inline comments may also include code suggestions, which allows for code-based revision suggestions that may be committed directly to the branch using markdown codeblocks (```suggestion).
  • -
  • Github issues: Creating issues from comments allows the creation of new repository issues to address topics outside of the current PR.
  • -
-
- -

Merging the Branch after Review

- -
-%%{init: { 'logLevel': 'debug', 'theme': 'default' , 'themeVariables': {
-      'git0': '#6366F1'
-} } }%%
-    gitGraph
-       commit id: "..."
-       commit id: "opened"
-       commit type: HIGHLIGHT id: "reviewed"
-       commit id: "...."
-
- -

An example git diagram showing the main branch after the assignment branch has been merged (and removed).

- -

Changes may be made within the assignment branch until the work is in a state where the authors and reviewers are satisfied. At this point, the branch changes may be merged into main. Approvals are sometimes provided informally (for ex., with a comment: “LGTM (looks good to me)!”) or explicitly (for ex., approvals within Github) to indicate or enable branch merge readiness. After the merge, changes may continue to be made in a similar way (perhaps accounting for concurrently branched work elsewhere). Generally, a merged branch may be removed afterwards to help maintain an organized working environment (see Github PR branch removal).

- -
-

Github provides special tools for merging:

- -
    -
  • Decide which merge strategy is appropriate: there are many merge strategies within Github (merge commits, squash merges, and rebase merging). Take time to understand them and choose which one works best.
  • -
  • Consider using branch protection to automate merge requirements: The main or other branches may be “protected” against merges using branch protection rules. These rules can require reviewer approvals or automatic status checks to pass before changes may be merged.
  • -
  • Use merge queuing to manage multiple PRs: When there are many unmerged PRs, it can sometimes be difficult to document and ensure each is merged in a desired sequence. Consider using merge queues to help with this process.
  • -
-
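As an illustrative sketch of one strategy, the command-line equivalent of a squash merge collapses a branch's draft commits into a single commit on the default branch (run in a scratch repository; names and messages are invented):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "opened"

# several small draft commits on the assignment branch
git checkout -q -b assignment
for n in 1 2 3; do
  echo "$n" >> work.txt
  git add work.txt
  git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "draft $n"
done

# back on the default branch, squash the drafts into one reviewed commit
git checkout -q -
git merge --squash -q assignment
git -c user.email=demo@example.com -c user.name=demo \
  commit -q -m "reviewed: completed assignment"

git log --oneline
```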
- -

Additional Resources

- -

The links below may provide additional guidance on using these git features, including in-depth coverage of various features and related configuration.

- -]]>
dave-bunten
Tip of the Week: Software Linting with R2023-01-30T00:00:00+00:002024-01-25T20:55:52+00:00/set-website/preview/pr-29/2023/01/30/Software-Linting-with-RTip of the Week: Software Linting with R - -
- - -
- -


- -
-
- - - -

This article covers using the software technique of linting on R code in order to improve code quality, development velocity, and collaboration.

- - - -

TLDR (too long, didn’t read); -Use software linting (static analysis) practices on your R code with existing packages lintr and styler (among others). These linters may be applied using pre-commit in your local development environment or as continuous tests using for example Github Actions.

- -

Treating R as Software

- -
-

“Many users think of R as a statistics system. We prefer to think of it as an environment within which statistical techniques are implemented.”

-
- -

(R-Project: What is R?)

- -

The R programming language is sometimes treated as only a statistics system rather than software. This treatment can lead to common development issues experienced in other languages. Addressing R as software enables developers to enhance their work by benefiting from existing concepts applied to many other languages.

- -

Linting with R

- -
-flowchart LR
-  write[Write R code] --> |check| check[Check code with linters]
-  check --> |revise| write
-
- - -

Workflow loop depicting writing R code and revising with linters.

- -

Software linting, or static analysis, is one way to ensure a minimum level of code quality without writing new tests. Linting checks how your code is structured without running it to make sure it abides by common language paradigms and logical structures. Using linting tools allows a developer to gain quick insights about their code before it is viewed or used by others.

- -

One way to lint your R code is by using the lintr package. The lintr package is also complementary to the styler package, which formats the syntax of R code in a consistent way. Both of these can be used independently or as part of continuous quality checks for R code repositories.
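As a minimal sketch (assuming both packages are installed and a file named `script.R` exists in the working directory):

```r
# install.packages(c("lintr", "styler"))  # one-time setup

# report style and syntax issues without modifying the file
lintr::lint("script.R")

# lint every R file under the current project directory
lintr::lint_dir(".")

# rewrite a file in place to follow a consistent style
styler::style_file("script.R")
```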

- -

Automated Linting Checks with R

- -
-flowchart LR
-  subgraph development
-    write
-    check
-  end
-  subgraph linters
-    direction LR
-    lintr
-    styler
-  end
-  check <-.- linters
-  write[Write R code] --> |check| check[Check code with pre-commit]
-  check --> |revise| write
-
- -

Workflow showing development with pre-commit using multiple linters.

- -

lintr and styler can be incorporated into automated checks to help make sure linting (or other steps) is always applied to new code. One tool which can help with this is pre-commit, which acts as a local development tool in addition to providing observability within source control (more on this later).

- -

Using pre-commit locally enables quick feedback loops using one or many checkers (such as lintr, styler, or others). Pre-commit may be used through the use of git hooks or manually using pre-commit run ... from a command-line. See this example of pre-commit checks with R for an example of multiple pre-commit checks for R code.
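As a configuration sketch (the hook repository revision shown is illustrative; consult the linked example for current hook ids), a `.pre-commit-config.yaml` wiring in styler and lintr might look like:

```yaml
# .pre-commit-config.yaml (revision is illustrative; pin to a current release)
repos:
  - repo: https://github.com/lorenzwalthert/precommit
    rev: v0.3.2
    hooks:
      - id: style-files   # runs styler on staged R files
      - id: lintr         # runs lintr on staged R files
```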

- -

Continuous and Observable Testing for R

- -
-flowchart LR
-  subgraph development [local development]
-    direction LR
-    write
-    check
-    commit
-  end
-  subgraph remote[Github repository]
-    direction LR
-    action["Check code (remotely)"]
-  end
-  write[Write R code] --> |check| check[Check code with pre-commit]
-  check --> |revise| write
-  check --> commit[commit + push]
-  commit --> |optional trigger| action
-  check -.-> |perform same checks| action
-
- -

Workflow showing pre-commit used as continuous testing tool with Github.

- -

Pre-commit linting checks can also be incorporated into continuous testing performed on your repository. One way to do this is using Github Actions. Github Actions provides a programmatic way to specify automatic steps taken as changes occur to a repository.

- -

Pre-commit provides an example Github Action which will automatically check and alert repository maintainers when code issues are detected. Using pre-commit in this way allows R developers to ensure lintr checks are performed on any new work checked into a repository. This can have benefits towards decreasing pull request (PR) review time and standardizing how code collaboration takes place for R developers.
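A minimal workflow based on that example might look like the following (the action version tags here are assumptions; check pre-commit's documentation for current releases):

```yaml
# .github/workflows/pre-commit.yml (version tags are illustrative)
name: pre-commit
on: [push, pull_request]
jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      # runs every hook listed in .pre-commit-config.yaml
      - uses: pre-commit/action@v3.0.1
```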

- -

Resources

- -

Please see the following resources on this topic.

- -]]>
dave-bunten
Tip of the Week: Timebox Your Software Work2023-01-17T00:00:00+00:002024-01-25T20:55:52+00:00/set-website/preview/pr-29/2023/01/17/Timebox-Your-Software-WorkTip of the Week: Timebox Your Software Work - -
- - -
- -


- -
-
- - - -

Programming often involves long periods of problem solving which can sometimes lead to unproductive or exhausting outcomes. This article covers one way to avoid unproductive use of time and protect yourself from overexhaustion through a technique called “timeboxing” (also sometimes referenced as “timeblocking”).

- - - -

TLDR (too long, didn’t read); -Use timeboxing techniques such as Pomodoro® or 52/17 to help modularize your software work to ensure you don’t fall victim to Parkinson’s Law. Timeboxing may also map well to Github Issues, which allows your software tasks to be further aligned, documented, and chunked in collaboration with others.

- -

Controlling Work Time Expansion

- -
- - Image depicting work as a creature with a timebox around it. - - -
- Image depicting work as a creature with a timebox around it. - -
- -
- -

Have you ever spent more time than you thought you would on a task? An adage which helps explain this phenomenon is Parkinson’s Law:

- -
-

“… work expands so as to fill the time available for its completion.”

-
- -

The practice of writing software is not protected from this “law”. It may be affected in even worse ways during long periods of uninterrupted programming, where we may be inclined to lose sight of productive goals.

- -

One way to address this is through the use of timeboxing techniques. Timeboxing sets a fixed limit to the amount of time one may spend on a specific activity. One can use timeboxing to systematically address many tasks, for example, as with the Pomodoro® Technique (developed by Francesco Cirillo) or the 52/17 rule. While there are many ways to apply timeboxing, make sure to balance activity with short breaks and focus switches to help ensure we don’t become overwhelmed.
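As a toy illustration of the idea (the function name and its defaults are invented for this sketch, mirroring a Pomodoro-style cadence), a timeboxed day can be expressed as alternating fixed-length work and break blocks:

```python
from datetime import datetime, timedelta

def timebox_schedule(start, work_min=25, break_min=5, cycles=4):
    """Build alternating work/break blocks as (label, start, end) tuples."""
    blocks, t = [], start
    for i in range(1, cycles + 1):
        # a fixed-length work block, then a short break
        blocks.append((f"work {i}", t, t + timedelta(minutes=work_min)))
        t += timedelta(minutes=work_min)
        blocks.append((f"break {i}", t, t + timedelta(minutes=break_min)))
        t += timedelta(minutes=break_min)
    return blocks

plan = timebox_schedule(datetime(2023, 1, 17, 9, 0))
for label, begin, end in plan:
    print(f"{label}: {begin:%H:%M}-{end:%H:%M}")
```

The fixed end time of each block is the point: when it arrives, the activity stops regardless of completion, counteracting Parkinson’s Law.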

- -

Timeboxing Means Modularization

- -

Timeboxing has an auxiliary benefit: it frames your work as objective and oftentimes smaller chunks (we have to know what we’re timeboxing in order to use this technique). Creating distinct chunks of work applies to both our daily time schedule and code itself. This concept is more broadly called “modularization” and helps to distinguish large portions of work (whether in real life or in code) as smaller and more maintainable chunks.

- -
- - -
-
# Goals
-- Finish writing paper
-
-
-
-
-
- -

Vague and possibly large task

- -
- - -
-
# Goals
-- Finish writing paper
-  - Create paper outline
-  - Finish writing introduction
-  - Check for dead hyperlinks
-  - Request internal review
-
- -

Modular and more understandable tasks

-
- -
- -

Breaking down large amounts of work as smaller chunks within our code helps to ensure long-term maintainability and understandability. Similarly, keeping our tasks small can help ensure our goals are achievable and understandable (to ourselves or others). Without this modularity, tasks can be impossible to achieve (subjective in nature) or very difficult to understand. Stated differently, taking many small steps can lead to a big change in an organized, oftentimes less exhausting way (related graphic).

- -

Version Control and Timeboxing

- -
# Repo Issues
-- "Prevent foo warning" - 20 minutes
-- "Remove bar feature" - 20 minutes
-- "Address baz error" - 20 minutes
-
-
- -

List of example version control repository issues with associated time duration.

- -

The parallels between the time we give a task and related code can work to your benefit. For example, Github Issues can be created to outline a timeboxed task which relates to a distinct chunk of code to be created, updated, or fixed. Once development tasks have been outlined as issues, a developer can use timeboxing to help organize how much time to allocate to each issue.

- -

Using Github Issues in this way provides a way to observe task progress associated with one or many repositories. It also increases collaborative opportunities for task sizing and description. For example, if a task looks too large to complete in a reasonable amount of time, developers may work together to break the task down into smaller modules of work.

- -

Be Kind to Yourself: Take Breaks

- -

While timeboxing is often a conversation about how to be more productive, it’s also worth remembering: take breaks to be kind to yourself and more effective. Some studies and thought leadership have shown that taking breaks may be necessary to avoid performance decreases and impacts to your health. There’s also some indication that taking breaks may lead to better work. See below for just a few examples:

- - - -

Additional Resources

- -]]>
dave-bunten
Tip of the Week: Linting Documentation as Code2023-01-03T00:00:00+00:002024-01-25T20:55:52+00:00/set-website/preview/pr-29/2023/01/03/Linting-Documentation-as-CodeTip of the Week: Linting Documentation as Code - -
- - -
- -


- -
-
- - - -

Software documentation is sometimes treated as a less important or secondary aspect of software development. Treating documentation as code allows developers to version control the shared understanding and knowledge surrounding a project. Leveraging this paradigm also enables the use of tools and patterns which have been used to strengthen code maintenance. This article covers one such pattern: linting, or static analysis, for documentation treated like code.

- - - -

TLDR (too long, didn’t read); -There are many linting tools available which enable quick revision of your documentation. Try using codespell for spelling corrections, mdformat for markdown file formatting corrections, and vale for more complex editorial style or natural language assessment within your documentation.

- -

Spelling Checks

- -
- - -
-
<!--- readme.md --->
-## Example Readme
-
-Thsi project is a wokr in progess.
-Code will be updated by the team very often.
-
-(CU Anschutz)[https://www.cuanschutz.edu/]
-
- -

Example readme.md with incorrectly spelled words

-
- - -
-
% codespell readme.md
-readme.md:4: Thsi ==> This
-readme.md:4: wokr ==> work
-readme.md:4: progess ==> progress
-
-
-
-
- -

Example showing codespell detection of misspelled words

-
- -
- -

Spelling checks may be used to automatically detect incorrect spellings of words within your documentation (and code!). Codespell is one library which can lint your word spelling. Codespell may be used through the command-line and also through a pre-commit hook.
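Codespell also publishes a pre-commit hook. As a configuration sketch (the revision shown is an assumption; pin to a current release), it can be added to a `.pre-commit-config.yaml`:

```yaml
# .pre-commit-config.yaml (revision is illustrative)
repos:
  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.6
    hooks:
      - id: codespell  # checks staged files for common misspellings
```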

- -

Markdown Format Linting

- -
- - -
-
<!--- readme.md --->
-## Example Readme
-
-This project is a work in progress.
-Code will be updated by the team very often.
-
-(CU Anschutz)[https://www.cuanschutz.edu/]
-
- -

Example readme.md with markdown issues

-
- - -
-
% markdownlint readme.md
-readme.md:2 MD041/first-line-heading/first-line-h1
-First line in a file should be a top-level heading
-[Context: "## Example Readme"]
-readme.md:6:5 MD011/no-reversed-links Reversed link
-syntax [(link)[https://www.cuanschutz.edu/]]
-
-
- -

Example showing markdownlint detection of issues

-
- -
- -

The format of your documentation files may also be linted for common issues. This may catch things which are otherwise hard to see when editing content. It may also improve the overall web accessibility of your content, for example, through proper HTML header order and image alternate text. Markdownlint is one library which can be used to find issues within markdown files.

- -

Additional and similar resources to explore in this area:

- - - -

Editorial Style and Grammar

- -
- - -
-
<!--- readme.md --->
-# Example Readme
-
-This project is a work in progress.
-Code will be updated by the team very often.
-
-[CU Anschutz](https://www.cuanschutz.edu/)
-
- -

Example readme.md with questionable editorial style

-
- - -
-
% vale readme-example.md
-readme-example.md
-2:12  error    Did you really mean 'Readme'?   Vale.Spelling
-5:11  warning  'be updated' may be passive     write-good.Passive
-               voice. Use active voice if you
-               can.
-5:34  warning  'very' is a weasel word!        write-good.Weasel
-
- -

Example showing vale warnings and errors

-
- -
- -

Maintaining consistent editorial style and grammar may also be a focus within your documentation. These issues are sometimes more difficult to detect and more opinionated in nature. In some cases, organizations publish guides on this topic (see Microsoft Writing Style Guide, or Google Developer Documentation Style Guide). Some of the complexity of writing style may be linted through tools like Vale. Using common configurations through Vale can unify how language is used within your documentation by linting for common style and grammar.
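As a configuration sketch (style names must match style packages present under `StylesPath`; the `write-good` style here matches the warnings shown above), a minimal `.vale.ini` might look like:

```ini
; .vale.ini (minimal sketch)
StylesPath = styles
MinAlertLevel = suggestion

[*.md]
BasedOnStyles = Vale, write-good
```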

- -

Additional and similar resources to explore in this area:

- -
    -
  • textlint - similar to Vale with a modular approach
  • -
- -

Resources

- -

Please see the following resources on this topic.

- -]]>
dave-bunten
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-
- - Who we are - -
- -

Who we are

- - -

We are a small group of dedicated software developers within the Department of Biomedical Informatics at the University of Colorado Anschutz.

- - - - -
-
-
- - - - - -
- - -
- - What we do - -
- -

What we do

- - -

We support the labs and individuals within the Department by developing high-quality web applications, web servers, data visualizations, data pipelines, and much more.

- - - - -
-
-
- - -
- - - - - - - diff --git a/preview/pr-29/members/dave-bunten.html b/preview/pr-29/members/dave-bunten.html deleted file mode 100644 index 4b97252cda..0000000000 --- a/preview/pr-29/members/dave-bunten.html +++ /dev/null @@ -1,572 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Dave Bunten (@d33bs) | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-
- - -
- - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- -
- - -
-

Dave Bunten is a multiskilled research data engineer with a passion for expanding human potential through software design, collaboration, and innovation. -He brings a diverse background in higher education, healthcare, and software development to help orchestrate scientific data pipelines. -Outside of work, Dave enjoys hiking, biking, painting, and spending time with family.

- -

- See Dave Bunten (@d33bs)’s papers on the Research page -

- - - -
- -
-
- - -
- - - - - - - diff --git a/preview/pr-29/members/faisal-alquaddoomi.html b/preview/pr-29/members/faisal-alquaddoomi.html deleted file mode 100644 index 7676ca085f..0000000000 --- a/preview/pr-29/members/faisal-alquaddoomi.html +++ /dev/null @@ -1,572 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Faisal Alquaddoomi (@falquaddoomi) | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-
- - -
- - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- -
- - -
-

Faisal has been working as a full-stack developer for the past fifteen years. He was the lead developer on svip.ch (the Swiss Variant Interpretation Platform), a variant database with a curation interface. He has also worked with the BRCA Challenge on BRCA Exchange as a mobile, web, and backend/pipeline developer.

- -

Since starting at the University of Colorado Anschutz in July 2021, he has been primarily engaged in porting applications to Google Cloud, including profiling apps for their resource requirements, writing IaC descriptions of the application stacks, and adding instrumentation.

- -

- See Faisal Alquaddoomi (@falquaddoomi)’s papers on the Research page -

- - - -
- -
-
- - -
- - - - - - - diff --git a/preview/pr-29/members/vincent-rubinetti.html b/preview/pr-29/members/vincent-rubinetti.html deleted file mode 100644 index 0c5eb1687d..0000000000 --- a/preview/pr-29/members/vincent-rubinetti.html +++ /dev/null @@ -1,607 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Vincent Rubinetti (@vincerubinetti) | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-
- - -
- - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- -
- - -
-

Vince is a staff frontend developer in the Department. -His job is to take the studies, projects, and ideas of his colleagues and turn them into beautiful, dynamic, fully-realized web applications. -His work includes app development, website development, UI/UX design, logo design, and anything else visual or creative. -Outside of the lab, Vince is a freelance music composer for indie video games and the YouTube channel 3Blue1Brown.

- -

- See Vincent Rubinetti (@vincerubinetti)’s papers on the Research page -

- - - -
- -
-
- - -
- - - - - - - diff --git a/preview/pr-29/portfolio/index.html b/preview/pr-29/portfolio/index.html deleted file mode 100644 index a77922c4a6..0000000000 --- a/preview/pr-29/portfolio/index.html +++ /dev/null @@ -1,1050 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Portfolio | Software Engineering Team - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - Software Engineering Team - - - CU Dept. of Biomedical Informatics - - - - - - - - -
- -
- - - - - - - - - - - - - -
-

Portfolio

- -

We work with many groups both within and outside the University of Colorado:

- - - -

Projects

- -
- -


- -
- - MolEvolvR - - -
- - - MolEvolvR - - - - - for JRaviLab - - - -

A web app that enables researchers to run a general-purpose computational workflow for -characterizing the molecular evolution and phylogeny of their proteins of interest.

- - - - - - - -
- - - server - - -
- - - -
-
- -
- - Pycytominer - - -
- - - Pycytominer - - - - - for the Way Lab - - - -

A suite of common functions used to process high-dimensional readouts from high-throughput cell experiments.

- - - - - - - - - - - -
-
- -
- - CytoTable - - -
- - - CytoTable - - - - - for the Way Lab - - - -

A Python package that enables large-scale data processing to enhance single-cell morphology data analysis.

- - - - - - - - - - - -
-
- -
- - Simplex - - -
- - - Simplex - - - - - for the Krishnan Lab - - - -

A web app and supporting backend for simplifying scientific and medical writing.

- - - - - - - - - - - -
-
- -
- - MyGeneset.info - - -
- - - MyGeneset.info - - - - - for BioThings.io - - - -

A web app built from scratch to allow users to collect, save, and share sets of genes. A geneSET companion to MyGene.info.

- - - - - - - - - - - -
-
- -
- - Word Lapse - - -
- - - Word Lapse - - - - - for the Greene Lab - - - -

A frontend web app and supporting backend server that allows users to explore how a word's meaning changes over time, based on natural language processing and machine learning.

- - - - - - - - - - - -
-
- -
- - Monarch Initiative UI - - -
- - - Monarch Initiative UI - - - - - for TISLab - - - -

A redesign and rewrite of the Monarch Initiative application from the ground up, designed to be more modern, maintainable, robust, and accessible.

- - - - - - - - - - - -
-
- -
- - Monarch Initiative Cloud Migration - - -
- - - Monarch Initiative Cloud Migration - - - - - for TISLab - - - -

A migration of the entire Monarch Initiative backend and associated services from physical hardware to Google Cloud, including automated provisioning and deployment via Terraform, Ansible, and Docker Swarm.

- - - - - - - - - - - -
-
- -
- - GraphDB Deployer - - -
- - - GraphDB Deployer - - - - - - -

Automates the parsing and transformation of a KGX archive into graph-database-ready formats, then automates the provisioning and deployment of Neo4j and Blazegraph instances from the converted archive.

- - - - - - - - - - - -
-
- -
- - Lab Website Template - - -
- - - Lab Website Template - - - - - for the Greene Lab - - - -

An easy-to-use, flexible website template for labs. What this very site is built on!

- - - - - - - - - - - -
-
-
- - -
- - - - - - - diff --git a/preview/pr-29/redirects.json b/preview/pr-29/redirects.json deleted file mode 100644 index 9e26dfeeb6..0000000000 --- a/preview/pr-29/redirects.json +++ /dev/null @@ -1 +0,0 @@ -{} \ No newline at end of file diff --git a/preview/pr-29/robots.txt b/preview/pr-29/robots.txt deleted file mode 100644 index dd2f06eeff..0000000000 --- a/preview/pr-29/robots.txt +++ /dev/null @@ -1 +0,0 @@ -Sitemap: /set-website/preview/pr-29/sitemap.xml diff --git a/preview/pr-29/sitemap.xml b/preview/pr-29/sitemap.xml deleted file mode 100644 index b85ab60e4c..0000000000 --- a/preview/pr-29/sitemap.xml +++ /dev/null @@ -1,87 +0,0 @@ - - - -/set-website/preview/pr-29/members/dave-bunten.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/members/faisal-alquaddoomi.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/members/vincent-rubinetti.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2022/10/17/Use-Linting-Tools-to-Save-Time.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2022/11/27/Diagrams-as-Code.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2022/12/05/Data-Engineering-with-SQL-Arrow-and-DuckDB.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2022/12/12/Remove-Unused-Code-to-Avoid-Decay.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/01/03/Linting-Documentation-as-Code.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/01/17/Timebox-Your-Software-Work.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/01/30/Software-Linting-with-R.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/02/13/Branch-Review-and-Learn.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/03/15/Automate-Software-Workflows-with-Github-Actions.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/07/07/Using-Python-and-Anaconda-with-the-Alpine-HPC-Cluster.html -2024-01-25T20:55:52+00:00 - - 
-/set-website/preview/pr-29/2023/09/05/Python-Packaging-as-Publishing.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/10/04/Data-Quality-Validation.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2023/11/15/Codesgiving-Open-source-Contribution-Walkthrough.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/2024/01/22/Python-Memory-Management-and-Troubleshooting.html -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/blog/ -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/about/ -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/portfolio/ -2024-01-25T20:55:52+00:00 - - -/set-website/preview/pr-29/ -2024-01-25T20:55:52+00:00 - -