This is a direct fork of ActiveLamp/12-factor-demo, with the intent of bringing it up to date as a Drupal repository for Docker, with a Docker-Sync component facilitating local site development. In addition to moving to Drupal 8.9.x for the basic install, watch to see if progress is made on adding a much richer composer.json file to 'Base' so that a user has a real jump start on a configured Drupal install with some key contributed modules. The other intent is to work with the docker-compose files to add some development tools into the container so that a new developer has some perspective on moving back and forth between their local hard drive and the container. That way one can see the changes made locally and how they are 'taking' in the container. The Docker-Sync capability is set to make sure that is happening automatically behind the scenes, but I find it valuable to 'peek' inside the container when I am trying to debug any bumps (https://docker-sync.readthedocs.io/en/latest/index.html). That probably has two general values: 1) just learning, and 2) trying to do a 'move' rather than a clean new site install and build. In this latter situation you might want to check hash values, autoload builds, composer install .json and .lock results, settings.php, local.settings.php, etc. to assure the prior and new container environments have working cross matches.
The original ActiveLamp repository was contributed by Tom Friedhof. It takes its name as a 12-factor-demo from Heroku's 12-factor recommendation for software-as-a-service best practices. Mr. Friedhof provides an explanation of the factors in his original repository in a series of YouTube videos (www.youtube.com/watch?v=FiaLKwdv9TI and https://www.youtube.com/watch?v=BhdSn6XlmWo). One of the most critical points is having Development, Staging-Testing, and Production environments for how you work. Being able to use Docker-Sync in the Development environment so it 'speaks' to your container is a great starting point in what he has provided. As noted above, I believe adding some development tools for use within the container has further value. That said, I am in full alignment with Mr. Friedhof's point about keeping three separate environments, and NOT having the extra 'tools' in the Production environment makes perfect sense. So as progress is made on this repository, one might expect core container production elements in a common Docker container definition applicable to all three environments, the inclusion of development tools only in the Development environment, and the addition of testing tools only in the Staging-Testing environment. All this leads toward a CI/CD (Continuous Integration/Continuous Deployment) logic. It is likely that one will want to leverage both a GitHub repository and a DockerHub repository to fully support the CI/CD approach, with separate Dockerfiles for development and testing being called into the Base container from the DockerHub registry.
If you are reading a GitHub 'Readme' file it might be safe to assume you "get" Git. But if not, familiarize yourself with it before diving into this. Obviously this site has some good help if you are into reading "Managing workflow runs" (on this site). I think one of the easiest overviews is to see Git and GitHub in use with Visual Studio Code, and there is a good video series on that: "Visual Studio Code | How to use git and github" (https://www.youtube.com/watch?v=Fk12ELJ9Bww). Plus, Visual Studio Code is a good way to work with Docker, "Manage Docker Easily With VS Code" (https://www.youtube.com/watch?v=4I8CRAzPLD4), and Docker Desktop has made it easier to see and manage your containers. We will also see if we might integrate a Portainer capability into this repository at a later date (https://www.youtube.com/watch?v=d_yCqZIui80). But back to Git: for a good reminder of the basics consider "A Git Cheatsheet of Commands You Might Need Daily" (https://medium.com/swlh/git-ready-a-git-cheatsheet-of-commands-you-might-need-daily-8f4bfb7b79cf). Plus, when your best approach is to take down the containers, clean out the old local project install, and start over, consider "Everyday Git: Clean up and start over" (https://everydayrails.com/2014/02/27/git-reset-clean.html). If you really want a fairly deep Git/GitHub training overview, including setting up your SSH connection for updating between local and GitHub and using branches and forking, there is a video worth watching, about an hour and ten minutes long, called "Git and GitHub for Beginners - Crash Course" (https://www.youtube.com/watch?v=RGOj5yH7evk).
- Pulling down the clone:
git clone https://github.com/RightsandWrongsgit/12-factor-demo.git
*NOTE: this example syntax is NOT from a 'Fork'. If you are just trying it out to see how Drupal works in a Docker container you can use that directly. But if you anticipate working with it and then version controlling it for your own site, you will want to fork the GitHub repository and change the 'https:// … ' part to use your own fork's clone address.
-
Now on your local host machine, change into the working directory of what you cloned:
cd 12-factor-demo
-
From the Command Line Run:
make standard_load
-
Run:
make standard_install
-
Run:
make Setup
-
Run:
make Start
-
Go to your Browser address bar type and hit enter:
localhost:7080
(Or, if you are using Docker Dashboard, you can now go to the running container and into the application.)
-
If you are going to start working on customization of the Drupal site with an intent to save it, you probably should also open Visual Studio Code and click the 'Source Control' icon, where you will initialize your local Git repository (or do it in your own favorite Git/IDE environment).
The first file to take a look at in the GitHub repository is that "Makefile". Think about it as the file that you are going to run to invoke the establishment (make Setup) of a Docker container and then to run (make Start) the container. Start with the lines 'Bundle-Install:' and 'Start-Sync:'. These are behind making the Docker-Sync function work, and the name 'gem' is a reference to the fact that it is written in the Ruby language. So you might want to glance at the 'Gemfile', 'Gemfile.lock', and the hidden directory '.bundle', which contains this function's configuration files and supporting components. (Remember Cmd+Shft+period can be used to show hidden files on a Mac.)
The lines you probably want to pay the closest attention to are the 'Setup:' and 'Start-Services:' steps. Recalling that you will run the (make Setup) and (make Start) steps from the command line, you will want to trace what they are doing by looking under each. You can see commands that do typical things like pull and push images, invoke Docker to install and build, and grab files like docker-compose.yml and docker-compose.dev.yml that define the container environment.
In the 'Start-Services:' line above, understand the sequencing of multiple compose files at:
https://docs.docker.com/compose/reference/overview/#specifying-multiple-compose-files
It may seem unusual to have a 'makefile' as part of a Drupal Docker repository, since makefiles are most commonly used for coordination of C-language compiling. But 'make' more generally offers scripting capability much like you might find in BASH or one of the other POSIX shell command tools. It offers pretty common and generally easy syntax, but also has the advantage of date/time-comparison triggering of updates across project components. At the most basic level, take a look at its use in a situation very similar to how it is used in this repository by reviewing this brief article (https://docs.php.earth/interop/make/). Some very sophisticated use of multiple, well-coordinated makefiles can be seen in this project right here on GitHub (https://github.com/druidfi/tools). HERE IS A KEY HINT - If you see something you want to try to customize in the 'makefile' to suit your own purposes, don't be afraid to try it; the syntax is as simple as "TARGET: PREREQUISITES" on one line, followed on the lines below by a TAB and the commands you want to run (WHERE YOU ARE MOST LIKELY TO MESS UP IS LEAVING TRAILING OR LEADING SPACES AND NOT USING A TAB TO INDENT THE COMMAND LINES YOU ENTER). For a detailed understanding of how a 'makefile' works you might review the complete documentation at https://www.gnu.org/software/make/manual/make.html
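To make that syntax concrete, here is a minimal sketch of a makefile target; the target name and commands are illustrative and not taken from this repository's Makefile:

    # target 'rebuild' lists the two compose files as prerequisites;
    # each command line below must begin with a TAB (required by make)
    rebuild: docker-compose.yml docker-compose.dev.yml
    	docker-compose build
    	docker-compose up -d

You would run it from the project directory with: make rebuild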
The 'docker-compose.yml' file is the place where the container environment is established. There are all sorts of videos and other resources that tell you how to set up the container, so I won't go into a bunch of detail here. The key thing to know is that containers are made up of 'images' and that 'images' are pulled from places like DockerHub. Images are the parts and are assembled into the whole; assuring that the parts talk to each other is one of the key things in defining the docker-compose.yml file. Think of what the ActiveLamp/12-factor-demo GitHub has in its docker-compose.yml file as just the core basics.
What we want to do is relate this docker-compose.yml file to how it works in the overall system. The core basics are fine in some regards but have limitations in others. Thus, glance back up at the 'Start-Services:' lines in the Makefile to understand how one might expand from the basics. You see in that line how docker-compose.yml is followed by the docker-compose.dev.yml file, and the key thing you need to know is that the line processes left to right; if you put additions in a subsequent file after the docker-compose.yml file on that line, the commands in the following file will add to or override those in the preceding yml file (a sketch of this follows below). Why we care about this is that there is certain stuff we want in our development environment, like tools to build our site, that we want to remove from the production environment so they don't create a security risk. On the flip side, there are some things we want in our production environment to make it faster or work across servers, etc. that would get in the way if we had them in our development environment. And, of course, we want our testing environment to pretty much match our production environment, though we may have some testing tools themselves that should be excluded from production. As this project repository develops, you might anticipate we will add further functionality in a Development, Staging-Testing, Production orientation to support a CI/CD workflow.
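Here is a minimal sketch of that override behavior; the service names and values are illustrative rather than copied from this repository's files:

    # docker-compose.yml (base; values illustrative)
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "7080:80"

    # docker-compose.dev.yml (loaded second, so it adds to or overrides the base)
    services:
      nginx:
        volumes:
          - ./config/nginx/site.conf:/etc/nginx/conf.d/default.conf:ro

Invoked together, the later file wins on any duplicated keys:

    docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d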
Under the php: line in the docker-compose.yml file shown above, you will notice an image: line with a drupal:version. In the original ActiveLamp repository the Drupal image: was drupal:8.3-fpm, but in this repository it has been changed; and YOU might want to change it yourself at some point if you need to. It doesn't specifically matter which Drupal version is in this docker-compose.yml file, because the actual Drupal site install is done in a later step via the composer.json file (which is discussed later in this documentation). However, the version used in this docker-compose.yml file should precede or equal the one you plan to use in the final site install step, to take advantage of backward compatibility. The reason the change was made between what the ActiveLamp repository used and what this one uses is that you need an image to still be available on the DockerHub registry to be called in the build. The ActiveLamp image of 8.3-fpm didn't show on the list any more, and thus one key is to update this starting point to a Drupal image that is available on the DockerHub list. If this repository isn't current when you use it, check DockerHub for an available Drupal image and do this edit before the "make" step is run.
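The edit itself is a single line; the tag below is illustrative, so substitute whatever Drupal tag DockerHub currently lists:

    services:
      php:
        image: drupal:8.9-fpm   # swap this tag for one currently available on DockerHub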
Sharing what you do to your application on the HOST with the container that will run the application has a couple of key aspects to it. If we think of containers like 'vapor', the question must be addressed of how my efforts will be saved or persist after working on them. And we also need to address the question of how the heck I do work inside this container in the first place. In our repository, we see how these questions are being addressed via loading the docker-compose.dev.yml file following the load of the docker-compose.yml file in that 'Start-Services:' line in the Makefile. This docker-compose.dev.yml file uses the Docker "Volume" function to address the persistency question, and the docker-sync.yml approach as a way to have edits you make on your local host files synchronize with updates to the files in your container.
The concept of Docker Volumes is one of the standard ways that Docker itself has the container store stuff on your local host. Inside the running container there is a writable layer for your data, and you want it in there for performance. But to get your information to persist after the container is shut down for any reason, you want the information to be stored, and Docker offers a couple of ways to do this. Volumes is one of those ways and, as this is being written, is noted by Docker as the preferred approach. You can learn more about this on the Docker site (https://docs.docker.com/storage/volumes/).
Perhaps the most interesting syntax in the Makefile is the call to docker-sync, a function that basically is the coordination point for how your HOST outside the container talks with the inside of the container. Take a look at line 7 in the 'docker-sync.yml' file; it is telling this tool that we want to share the './src' directory on our HOST computer (and all subdirectories and files beneath it) with the container. We have a related instruction in the docker-compose.dev.yml file to tell PHP within the container where it is to get the files it needs. It says to get those files from 'drupal-sync' and then to make them available within the container in the '/var/www' directory. In essence, this says "Use the files from '-drupal-sync:' and mount them in the volume '/var/www' within the container."
REMEMBER THAT IF YOU CHANGE THE LOCAL HOST DIRECTORY FOR THINGS LIKE EXISTING SITES OR FOR A MULTI-SITE STRATEGY, YOU NEED TO ADJUST THE LINES AS NOTED ABOVE!
A change to the HOST:CONTAINER directory synchronization is made between Mr. Friedhof's first and second video. The reason this change was made is discussed in the second ActiveLamp video by Tom Friedhof, "Factor Two - Dependency Management with Docker, Drupal, and Composer" (https://www.youtube.com/watch?v=BhdSn6XlmWo). In a nutshell, the reason is how and where he did the local host installation of Drupal using the container: to /var/www/src rather than /var/www/html. The key thing to understand at this point is that the above change is in the docker-sync.yml file, where line six is drupal-sync: and line seven is now src: './src'. Then in the docker-compose.dev.yml file, note for both php and nginx that volumes are declared where drupal-sync tells the container where to locate the application files it is referencing from your local machine, as -drupal-sync:/var/www:nocopy (a sketch follows below). You might find "Docker Basics: How to Share Data Between a Docker Container and Host" (https://thenewstack.io/docker-basics-how-to-share-data-between-a-docker-container-and-host/) a good way to get an overview of what "Volumes" are doing for you in Docker.
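Pulled together, the two files described above have approximately this shape; verify the exact lines against the files in this repository:

    # docker-sync.yml
    syncs:
      drupal-sync:
        src: './src'

    # docker-compose.dev.yml (the same volume entry appears under nginx)
    services:
      php:
        volumes:
          - drupal-sync:/var/www:nocopy

    volumes:
      drupal-sync:
        external: true   # the named volume is created and managed by docker-sync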
Not to be confused with 'docker-compose', there is something called "Composer" that is a dependency manager for PHP. Drupal is written in PHP, and Composer has been essential to working with Drupal since version 8. Most people who have worked with Drupal are aware of Composer primarily as the thing that makes sure the "modules" you add to the basic or core installation of Drupal all work in concert with one another; thus the picture of the orchestra conductor on the Composer website (https://getcomposer.org/download/). PHP is inherent in some other aspects of Drupal as well, for things like Twig in themes that make the site look attractive and the underlying Symfony components. Right now you don't need to dive into all its roles and value; just trust that you will be needing to use it now and a lot later on too. You need to have Composer installed at this point to continue (see how toward the end of this documentation in a section called "SETTING UP YOUR BASIC SYSTEM BEFORE YOU GET STARTED").
Below you will see an image of a composer.json file. Don't study this one too hard, because it is just provided to illustrate a few points before you dive in deeper. The first thing to notice is the "name", "description", and "type", which are common elements in virtually all the ones you will see. The "name" is important because it is what you would use to trigger the use of the file from the command line. In the Quick Start step to install Drupal you saw 'composer create-project drupal/recommended-project:8.9.11 src', and what that is saying is for Composer to create a project from the instructions in a file named 'drupal/recommended-project'. The command further said to do the install with a specific version of Drupal and to put it all into the "src" directory we created so as to make a clean install.
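In case the image doesn't render where you are reading this, an abbreviated sketch of such a file looks like the following (package versions are illustrative; compare against the actual file):

    {
        "name": "drupal/recommended-project",
        "description": "Project template for Drupal 8 sites built with Composer",
        "type": "project",
        "require": {
            "drupal/core-recommended": "^8.9",
            "drupal/core-composer-scaffold": "^8.9"
        },
        "require-dev": {
            "drupal/core-dev": "^8.9"
        }
    }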
Before we get away from reviewing this composer.json file, the other important thing you should note is that it has a statement "require": { (and another saying "require-dev": {) followed by a bunch of lines that name specific Drupal components or elements that must be included in the installation you are making. The example composer.json file shown above is for a standard installation. One of the strengths of Drupal is its extraordinary flexibility in terms of function and appearance through what are called "modules" and "themes", respectively. To make it easy by leveraging someone else's good thinking, you can do a search for 'composer template drupal' to find additional good starting points. See if you find any other Drupal project composer.json files that others have created, and review especially what they may have included in the "require": { section of their files.
The very first thing you must do if you want something more than the standard installation is to change the line in the "Quick Start" section that says 'composer create-project drupal/recommended-project:8.9.11 src' to say 'composer create-project --no-install drupal/recommended-project:8.9.11 src'. This puts everything you need onto the host computer but without triggering the installation itself. That buys you the opportunity to edit or move an alternative 'composer.json' file into the right location before triggering the actual install process and Docker container builds. After you have the 'composer.json' file you want in place, then you will run 'composer install'. You may want to take a look at the Drupal.org website for a more complete discussion of using Composer to install and manage Drupal (https://www.drupal.org/docs/develop/using-composer/using-composer-to-install-drupal-and-manage-dependencies).
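Put end to end, the customized flow sketched above looks like this (the replacement composer.json is whichever alternative file you have prepared):

    composer create-project --no-install drupal/recommended-project:8.9.11 src
    cd src
    # swap in your alternative composer.json here, then resolve and install everything:
    composer install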
Add - Create a folder on the GitHub repository named 'composer_configurations' where you put a bunch of alternative 'composer.json' files as starting points to get people off the ground running for specific common types of sites. Name them with intent so people know which to use, and create one or more associated 'readme.txt' files with how-to value; at a minimum, instructions on how to rename the specific-use file to a functional file and move it to the correct location. For example, mv book.composer.json src/ followed by cp book.composer.json composer.json followed by chmod 664 composer.json. Other starting-intent sites might be 'forum.composer.json', 'location.composer.json', 'store.composer.json', etc. Before building these out, confirm whether these should really be 'composer.lock' files to assure tight control of dependencies, and whether just the 'lock' file needs to be used or both it and the 'json' (https://getcomposer.org/doc/02-libraries.md). Also, see if the modules loaded by Composer, or in many cases included in the base Drupal install, must be enabled via Drush, or if any alternatives might exist versus having to do it via the Admin Menu by the user. And see if it is practical to move, permission, install, and enable the whole shootin' match with a Make Something set of commands I can just put in the existing Makefile with a set of sequential menu options.
Is this the best place to introduce preparation of an existing site (composer update, stop cron, clear cache) and building its inbound home via its absolutely most current composer.json file, in preparation for backup and migration?
NGINX is a web server. The best way to think of it is almost like a traffic cop combined with a traffic engineer. It is the flow control point between your browser (e.g. Safari, Chrome, Firefox, etc.) and your application, plus between your application and the resources it utilizes. For a 100-second video overview of NGINX, check this out: https://www.youtube.com/watch?v=JKxlsvZXG7c
As a control point, NGINX provides a map of the connection, but it also gives you authority over the format of connections (e.g. http, https, tcp, etc.). Of equal importance is the fact that it provides control over things that impact capacity and speed, like the number of servers, worker processes, cache management, file size, etc. But don't freak out: a lot of this stuff is set with defaults from your starting point with your website, and only after your success leads you to the performance demands of that success do you have to mess with a bunch of stuff. Then you can make edits or add to your nginx configuration.
One thing you do need a perspective on is how this works in a Docker world. The reason is that when you loaded the NGINX image with docker-compose.yml, it brought along that base starting point configuration and put the file into the container (in /etc/nginx/nginx.conf). I am not saying that with adding tools to your container, or some creative removal and move of a modified copy back into the container, you can't accomplish edits to this foundational nginx configuration file. But for the vast majority of anything you might want to do, the people who put together the version that came automatically as part of the image load made it much easier on you; they ended that file with include /etc/nginx/conf.d/*.conf;. Again, don't get overly fixated on studying this file, but do notice that about ten lines down is a statement beginning with http { and that the closing bracket for that section is way down at the bottom AFTER the include /etc/nginx/conf.d/*.conf; statement. In the NGINX jargon we say that the include statement, or any of the stuff in that section, is in the http CONTEXT.
There are three CONTEXTs in NGINX: http, server, and location. So if you look up how to do anything to modify your nginx configuration, the most fundamental thing to understand is what 'context' is applicable. Since NGINX is managing traffic, you might think that if you are wanting to map anything to a URL, then the context is likely to be the 'http context'. If you are wanting to talk to, say, an additional server that is dedicated to holding your image assets separately from your main application for performance or security reasons, you are likely dealing with the 'server context'. The 'location context' can be thought of as a map to files in directories and subdirectories that you are pointing your application to so it knows where to look. A big TIP: NGINX starts looking in the most specific (typically longest) path specified in the location context and, if the asset is not found, works its way to the most general; thus why you usually see an application root level at the top of a syntax list, set up to in essence specify the default but more performance-draining backup location, because thousands of files might end up being searched on the way to finding the one needed.
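The three contexts nest, which is easy to see in a stripped-down sketch (paths and values here are illustrative, not this repository's actual configuration):

    http {                                     # http context
        server {                               # server context
            listen 80;
            location /sites/default/files/ {   # location context: most specific match wins
                root /var/www;
            }
            location / {                       # general fallback location
                root /var/www;
            }
        }
    }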
So back to how the kind people who made the NGINX image made it easier for you with that 'include' statement at the end. If you look in the docker-compose.yml file where NGINX is being requested and its environment defined, you will see a 'volumes' statement. The first item under that volumes statement says - ./config/nginx/site.conf:/etc/nginx/conf.d/default.conf:ro. What this is doing is setting up how the file 'site.conf', located on your host machine, is being sent for use within the container. Notice that the subdirectory in the container, ending in .../conf.d/, is the same directory specified in that 'include' statement at the end of the 'nginx.conf' file the image loaded, and how the file named 'default.conf' falls within the referenced files defined by '*.conf'. Now you can see how you can easily modify the NGINX configuration by making a file on your host machine ending in .conf and stating within a docker-compose.yml volumes statement how to get it to function within the container.
Putting this into practice, below you will see a modification to the 'site.conf' file that was in the original Friedhof repository. The change is a super simple one in the sense that all it is doing is changing the size of a file that can be uploaded: client_max_body_size. There is a default value for that size of 1M; the equivalent statement would be client_max_body_size 1M;. But Friedhof didn't have to put that statement in at all, because he would have known the default. If you had a reason to allow a ten-megabyte file, you could change it to client_max_body_size 10M;. Or, as has been done in the example below, you could set it to 0 (with no 'M' or 'G'), and now you have totally eliminated any limitation on acceptable file size. [Add - maybe ok for the CI/CD upload of imported full site files, but shift this out in production].
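A sketch of that edit follows; treat it as the pattern rather than a copy of this repository's exact site.conf:

    client_max_body_size 0;        # http context (this file is pulled in by nginx.conf's include)

    server {
        client_max_body_size 0;    # server context

        location / {
            client_max_body_size 0;    # location context
            root /var/www;             # illustrative; see the real site.conf for its actual directives
        }
    }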
In the above example, an edit was made to Friedhof's site.conf file by putting client_max_body_size 0; in all three CONTEXTs. The first line is in the http context because that is where it sits when the nginx.conf 'include' calls any *.conf files. The context of the other two is pretty obvious just from where they appear in the example.
So do you have to be super smart to do all this configuration stuff in NGINX? Well, if you remember that you need additional volume lines in your docker-compose.yml file to get them into the running container, then the answer is no. You can probably go shopping at the 'boilerplate' repository on GitHub that contains a ton of already-developed configuration adjustments to serve many needs: https://github.com/nginx-boilerplate/nginx-boilerplate. A little less friendly, but certainly comprehensive, is NGINX's own documentation: https://nginx.org/en/docs/. When you get to that success level where you need some performance tuning, start by watching this video: https://www.youtube.com/watch?v=WMsqw68DhIg
Add - To handle the import of tar.gz files under the Admin/Synchronization of a site, to bring in an existing site, you will bang against the upload file size limit very quickly. So both the NGINX conf file for the Docker build of that image and the php.ini file for Drupal need to have those limits increased, at least temporarily. Therefore I need to add this: Remember that you would want to make this an append to the basic docker-compose.yml file, in sequence, to add to and/or override any items in it. The logic is to establish a user.ini file that can temporarily adjust things like upload_max_filesize=2M and post_max_size=8M to allow the 'import' function of Drupal synchronization for moving an existing site into Docker; otherwise what happens is you get something like an "Nginx 413 Request Entity Too Large" error. https://forums.docker.com/t/how-to-get-access-to-php-ini-file/68986/31
Of course you also get that error as a function of Nginx limits being set too low in the Docker container build, and you need to adjust the client_max_body_size value too in the Nginx conf file. But that may be something that I will just keep in the base build after I do the edits for it, so it may not need additional adjustment. Final documentation to be adjusted accordingly. https://stackoverflow.com/questions/2056124/nginx-client-max-body-size-has-no-effect https://zgadzaj.com/development/docker/docker-compose/containers/nginx Full NGINX documentation: https://nginx.org/en/docs/
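One way to sketch the PHP side of that override (the file name, mount path, and values below are illustrative; the official PHP-FPM-based images scan /usr/local/etc/php/conf.d/ for extra .ini files):

    # appended compose file, loaded after docker-compose.yml on the same command line
    services:
      php:
        volumes:
          - ./config/php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini:ro

    # config/php/uploads.ini
    upload_max_filesize = 512M
    post_max_size = 512M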
Add - The key next thing to put in documentation is how you remove the docker-compose.yml installed .src directory and below, replacing it with an empty .src directory in which you run the correct version of (https://github.com/drupal-composer/drupal-project/tree/8.x). It is likely that this current repository should edit a branch and merge it back to MASTER with that step done; and that a copy of the Drupal install to put in it should be forked from the original to be stored as an additional companion new repository in RightsandWrongsgit, but edited to specifically make the install into the .src directory, replacing the 'some-directory' statement in the original before forking. This should be outlined in the documentation in fair detail so that, if these RightsandWrongs repositories get dated, a user can do this update themselves. It appears the drupal-composer/drupal-project repository is very actively managed, so users can get the most current versions. But make sure to include a good discussion of what might be implied for 'major versions' and of key support continuity checks they should make, like Composer, Drush, and PHP versions.
This repository has an addition not included in Mr. Friedhof's, at the project (top) level, called db-backups. The original .gitignore file was also modified so that any database files put into that db-backups subdirectory aren't included in .git for pushing up to your GitHub repository. Thus, db-backups is basically an empty subdirectory, but we put a .keep file in it so it is acknowledged by Git and retained as you clone/pull/push with GitHub (see the sketch below). There are two broad reasons for this database backup subdirectory. First, remember we are attempting to build a 'WORKFLOW' logic into our design, and you may logically want to have a 'work spot' to bring a copy of your production database into locally as you cycle through your workflow. Second, this repository is attempting to create a reasonably automated process to jump-start a new site with relatively novice skills, and this database subdirectory can house a canonical working database that may contain minimal site content resources that configure select module, taxonomy, and configuration information past what Drupal itself houses (with a project.sql.example name for inclusion in the git/github management). This second point may seem odd, and Drupal.org has extensive discussion of Drupal configuration management, which is mainly managed with .yml files from version 8 forward; but it also comments on how Drupal stores configuration elements in the database. For additional discussion of these directory structure changes see "Dockerize an Existing Project" (https://drupalize.me/tutorial/dockerize-existing-project?p=3040).
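A plausible sketch of the .gitignore pattern described above (verify against this repository's actual .gitignore):

    # ignore database dumps, but keep the directory itself via the .keep file
    db-backups/*
    !db-backups/.keep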
Add - A specific project.sql.example file into the db-backups subdirectory and instructions here on renaming it.
Add - Inclusion of and discussion of the detailed workflow value of using the .env approach to environment management. See if the environment can be fully common between the Development, Staging-Testing, and Production elements of each supported site in a multi-site, but simply use DockerHub-registry-held versions of each workflow element with appropriate Dockerfile versions for each (https://gitlab.com/florenttorregrosa-drupal/docker-drupal-project/-/blob/8.x/example.dev.env). Review the example .env file from this Drupal template to make sure we are clean (https://github.com/drupal-composer/drupal-project/commit/2d48c40ad9a8187f12fda5ee74d1830f1d0086b4). It will also pay dividends to cross-check best practices here (https://github.com/vlucas/phpdotenv). And make sure this is coordinated with the credentials approach as noted here for 'autoload' (https://stackoverflow.com/questions/30881596/php-dotenv-unable-to-load-env-vars), remembering that the ActiveLamp starting point is working the autoload in the vendor subdirectory.
Add - Discussion of increasing the size of file uploads allowed by NGINX, which has a default limit of 1MB. This is done because most Drupal site tar files that you would import in the Administration backup-and-restore transfer of a site's code are much larger than the default size. Also discuss the importance of either changing this back to something lower and/or including size limits in Drupal content types that allow contributor uploads, so you reduce the risk of denial-of-service threats. (https://www.tecmint.com/limit-file-upload-size-in-nginx/#:~:text=By%20default%2C%20Nginx%20has%20a,http%2C%20server%20or%20location%20context.) & (https://scaledynamix.com/blog/increasing-file-upload-size-limit-in-nginx/)
Stuff I need to add to the functionality and documentation of this repository after the clean new install is fully running
Add - Discussion of a couple of hosting option alternatives. The overview that shows how you put more into your docker-compose and/or Dockerfile project definitions, which around the 16-minute mark includes one relatively generic hosting option, is "Putting it All Together - Docker, Docker-Compose, NGinx Proxy Manager, and Domain Routing - How To" (https://www.youtube.com/watch?v=cjJVmAI1Do4). But also offer the deeper support option of Lagoon with Amazee.io hosting, as discussed in "How to manage Multiple Drupal sites with Lagoon" (https://www.youtube.com/watch?v=R2tIivVvExQ&feature=emb_rel_end) | May want to work directly with Amazee staff on final coordination after providing them with the basic structure and logic from the other elements in this repo. | One key element to reconcile is that the multi-site logic of the Lagoon approach discussed in this video takes precedence over the multi-site aspect of the "Dockerize an Existing Project" directory layout, but NOT over the database backup aspects. And make sure a summary discussion of the benefits over the classic Drupal multi-site config approach is provided, so users only have to watch the Lagoon video if they want super detail. And don't forget to include the "Secrets" approach to protecting credentials in coordination with the .gitignore specifications.
Add - Cross-check the original ActiveLamp approach to the Drush installation to make sure it is updated to the latest Drush version. And in the process, cross-check this repository against good Drush-with-Drupal practices (https://stackoverflow.com/questions/35743801/how-to-use-docker-with-drupal-and-drush).
Add - Review of the "Tools" to include in the Development version of a Dockerfile to add to the DockerHub registry, which can then be pulled by the common docker-compose.yml file that cuts across all workflow stages. For the tool set in Development (https://github.com/glaux/drupal8docker/commit/59b9821b0db96ad007e443ccf79fae8f2154dbe3). Then, to get the docker-compose.yml file and the Dockerfile to work with one another and share the same network, consider the design logic discussed here (https://stackoverflow.com/questions/29480099/whats-the-difference-between-docker-compose-vs-dockerfile#:~:text=The%20answer%20is%20neither.,to%20your%20project%27s%20docker-compose.&text=Your%20Docker%20workflow%20should%20be,images%20using%20the%20build%20command.).
Consider Adding - A discussion of Path management so people who have any install hangups know where to look and what to tweak on their systems (https://stackoverflow.com/questions/36577020/php-failed-to-open-stream-no-such-file-or-directory). Include the Docker best practices for path management in the ENV approach (https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).
Consider Adding - A discussion of options for customization of Drupal Scaffolding beyond that of an automatic install (https://github.com/drupal/core-composer-scaffold/tree/8.9.x).
Add - An overview of various approaches to backups for your application and database, using linked sources but relating them in the discussion to the modification made to the directory structure, so local backups can be brought down and used as well.
Add - At least a discussion of MAKE and possibly scripting generally with emphasis on MAKE. Do this when considering automation of the workflow Docker-compose.yml and Dockerfile versions to be put on DockerHub registry.
If you are just starting out looking at different options for building a website, it can sometimes seem pretty complex. Oftentimes people are used to using their computer with all the GUI (Graphical User Interface) applications we have come to enjoy. But if you are going to enter the world where you are developing your own website, you are now going to get a little underneath the covers. Drupal is a CMS (Content Management System) that really has extraordinary capabilities for doing sophisticated websites. It has a reputation of being 'hard', and in some senses that is true if you haven't ever programmed much before. Therefore, lots of people default to some of the very simple options for putting up a website with just a few pages; almost like printing several ads on different pages of a newspaper and telling on each page what page numbers to turn to if the reader wants to see the other ads. That's OK if you aren't interested in changing the content to fit the site visitor, offering tools for them to explore what you are offering, or including forums, blogs, electronic purchases, etc. Even if you don't initially plan to use these more advanced features, some people would still start out with a simple Drupal site just to position themselves to expand more easily as their needs grow.
The way this GitHub Drupal repository is set up, its aim is to make it much easier to get started with Drupal than many other options. That is because it uses something called 'containers'; actually pretty advanced stuff in some senses but set up here to kind of partially automate your start. The key thing about containers is that they contain both your application (e.g. Drupal) and its operating environment (e.g. the server & tools). That way you sort of spin up both at once for a quicker start. That said, it is still a little harder than just driving around existing applications you might have on your computer. And you will need to use things like a 'terminal' for 'command line interface' (CLI) to the guts of your computer. Therefore, this SUPERNOVICE section is provided to give you some guidance in the basics. It is stuck here at the end because many people who will use this GITHUB repository will be experienced developers who don't want to read through a bunch of stuff that they already know.
You probably are on a Mac or Windows, possibly even Linux; these are operating systems (OS). Other than knowing which one you are using and some occasional differences in how you do various tasks, I won't review the details of each OS. There are all sorts of things on places like Wikipedia that give nice overviews and histories. But one thing you probably should know is that each of these OSes has something called a 'Kernel' that is like a set of commands that tell the computer what to do. And you talk to the Kernel with what is known as a 'Shell'. Again, there are a couple of different Shells, but they are pretty consistent at their most basic level. Mac is using 'zsh', but in most regards it is just a more capable version of BASH, and BASH is generally available across operating systems (if you want to argue that, then you obviously aren't a novice, so move on). BASH has some files that can be edited to provide 'scripting' (or a series of commands to be run). But for use here, all we want to do is edit your BASH file that manages what your command prompt looks like (that > or $ typically present when you open a terminal). If you use a Mac you probably know that your FINDER application shows you where files are in various directories, and on Windows it is something called EXPLORER. If you have worked with those, you know you often have subdirectories below subdirectories below subdirectories. With FINDER or EXPLORER you can visually see where you are when you point to a file. But imagine you are working just from a single line, like a command prompt on a terminal, and you could be lost as to where the heck you are. Therefore, we are going to start by modifying the BASH file in a way that makes that prompt show you information. There is something discussed later called 'GIT/GITHUB' regarding your connection to a remote computer, and we want to know where the heck we are on both our local computer and the remote computer. So the BASH file edit outlined next does both.
You want to edit the .bash_prompt file. The 'dot' in front of it hints that it is a hidden file. On a Mac you can hit CMD SHFT DOT and it will show hidden files. Then copy the code from this link, paste it in, and save it.* (https://raw.githubusercontent.com/mathiasbynens/dotfiles/master/.bash_prompt). Next time you open the terminal you should have a colored two-part command prompt that indicates, first, what local subdirectory you are in and, second, where you are on GITHUB (since you probably haven't set up and logged into GITHUB yet, it probably won't show anything for the second part yet, but trust me, it will later and be very helpful).
- You edit these in a text editor, and if you are using the terminal on a Mac you need to make sure that editor is set up for the right formats. That editor will use the keyboard function settings of your operating system, and those are set in "System Preferences" > "Keyboard" on macOS. There is a tab called "Text" in there and a box on that tab which, if checked, uses "smart quotes and dashes". Uncheck that box or you will be screwed up, because the terminal can't use that style of quotes and dashes in its commands.
Flesh these out with more explanation or source links, but basically the following (note this is Mac-oriented initially; Linux and Windows to be added later; see the command sketch after this list) ...
- Install Homebrew
- Install Git
- Obtain GitHub account
- Establish SSH Key to GitHub account
- Install Visual Studio Code (possibly via homebrew vscode option)
- Install Composer (see Drupal.org background on Composer for lots of detail: https://www.drupal.org/docs/develop/using-composer)
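As a rough macOS sketch of that list in command form (the Homebrew install command is current as of this writing; verify each tool's own docs before running anything):

    # install Homebrew, then the tools via brew
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    brew install git
    brew install composer
    brew install --cask visual-studio-code

    # generate an SSH key, then add the printed public key to your GitHub account settings
    ssh-keygen -t ed25519 -C "you@example.com"
    cat ~/.ssh/id_ed25519.pub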
Add - Include a quick discussion of how to get to the parts of VSCode to invoke a terminal to issue git/github commands, to bring up the command palette (like CLI) via shift-command-p, and to install extensions (plus what key extensions will really help you get started).
Sometimes when you have problems it helps to know a few diagnostic tricks, so some are provided here to at least get you started. You can always search for more on specific topics once you have some of the basics covered.
The first thing that is often a problem is that some file isn't being found when it is needed. You know about directories from your Finder or Explorer look around your system. But how does one part of your system know about files in another part? This is where PATH comes into play. You probably saw something like export PATH="/usr/local/git/bin:$PATH" when you looked in the .bash_profile, .bashrc, and/or .bash_prompt examples mentioned earlier. This is how you show one part of your system to another. Before you go messing around with adding to or editing those, two tips: 1) at your terminal command line interface, type echo $PATH and hit enter to see what paths are present already, and 2) see those things that look like quotation marks... we need to make sure they are neutral, vertical, straight, ASCII quotation marks, NOT the curly or curved kind from a word processor.
Sometimes you will get something like "Command not found" in your terminal when you open it, even if the prompt eventually shows up and lets you continue. But you have no idea what command wasn't found, where, why, etc. What you want to do is debug the Bash files. To do this, put the first two lines below at the beginning of your Bash file and then the third line at the end, open the terminal, and it will have given you a dump from which you can trace where that "Command not found" kicked out.
set -x # activate debugging from here
w
set +x # stop debugging from here
Drupal is pretty good at giving warnings during the installation process about not being able to find a directory or file. Sometimes it is not clear if it didn't find a file because it isn't there at all or because it can't get to it. So when you get a warning message, start by using Finder, Explorer, or their counterparts to look over the directories and files. If a directory simply isn't there, go to the directory immediately above where the new directory is needed, type mkdir newdirectoryname, and hit enter.
Most of the time a directory or file at least has 'read' permissions, so you will be able to see into it (remember Cmd+Shft+period can be used to show hidden files on a Mac). But keep in mind that there are occasions, especially during something like installing, which require permissions greater than just seeing and reading a file or looking inside a directory. The first thing you might want to check is exactly what the permission situation is on a file; you can type ls -l directory/directory/directory/filename.fileextension and hit enter to find out (if you are already in the directory where a file is located you don't need to string out the whole directory nest, just do ls -l filename.fileextension). Or, if you just want to see the files in the directory location you are already in AND include hidden files, type ls -al.
Commonly someone might want to grant wide access to a directory during installation and type chmod 777 directoryname and enter to do so, thus giving read, write, and execute permissions to 'self/group/everyone'. You wouldn't want to leave it that way on a live website for security reasons, so you would change it to a more appropriate designation afterward. You might do chmod 644 filename to grant read-write access to the owner (you) but only read access to your group and everyone else. Or perhaps chmod 664 filename to grant read-write to you and to your group but just read access to everyone else. If a file needs to be written to during the installation process you might chmod 666 filename to grant read-write access to self/group/everyone for something like a settings.php file, but you certainly want to lock permissions down for security reasons afterward. For more on permissions you might review (https://kb.iu.edu/d/abdb).
Sometimes you might question where Git is pulling its information from and whether it is set up correctly. To see where it is pulling from, run
git config --list --show-origin
You will find at times you may benefit from being able to copy code from your local machine to another machine to which you have made a Screen Sharing connection. You have another machine that shows up under Locations in your 'Finder', and you see it listed as a location. You have right-clicked on the other machine and requested to Screen Share it. This allows you to look at the files on the other machine and even move from directory to directory as you browse them. But sometimes you find a section of code that you want to copy and then bring over to your local machine and paste into another file you have open there. When you do the paste, what you thought should be in the clipboard isn't; it is either empty or whatever you had last copied locally is pasted. The copy-paste across machines doesn't seem to be working. All you need to do is go to the window where you have the 'Screen Sharing' open, move right on its menu to the 'Edit' button, pull down the list, click on the 'Use Shared Clipboard' option, and you will be all set.
There are a number of things that are controlled in your php.ini file, and it loads so early in the process that you can't do some things in, say, your settings.php or your local.settings.php file to change them. Therefore, you may need to edit the php.ini file directly. Some common things might be to edit the memory_limit=128M, the upload_max_filesize=2M, or the post_max_size=8M entries to increase or limit their values. Do remember that post_max_size should be set a little larger than the upload_max_filesize value, unless they are very large (e.g. 100M or more), because the uploaded file likely has some additional associated context material with it. To find the location of your php.ini file, at the command line type php -i | grep php.ini
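For example, applying the sizing rule just described might look like this inside php.ini (values illustrative):

    memory_limit = 256M
    upload_max_filesize = 100M
    ; set a little larger than upload_max_filesize, leaving room for the request's other fields
    post_max_size = 108M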