Carlos notes on installation
Aadi Bajpai edited this page Sep 2, 2021
Rough notes based on instructions given over Slack.
This page should serve as the basis for proper documentation and a decent install script.
Repo URL: https://github.com/CCExtractor/beacon-backend
Locally you will need this installed:
- serverless: https://www.serverless.com/framework/docs/providers/aws/guide/installation/ (note: many links in those docs are broken)
You will need accounts in the following places:
- mongodb.com
- AWS
- redis.com
You will also need:
- An SSH keypair (which you probably have already; you can just use your usual public key)
- A subdomain or domain you can use, unless you're really cheap
Walkthrough (order could be improved)
- In MongoDB, create an Atlas cluster (https://www.mongodb.com/cloud/atlas), then create a database and a user that can connect to it. Make a note of the DB name, the user name, and the user password. Free tier is fine; stick to the defaults and save the connection string.
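For reference, the connection string Atlas gives you has roughly this shape (every value below is a hypothetical placeholder, not a real credential):

```shell
# Placeholder pieces of an Atlas connection string — substitute what you noted down
DB_USER=beaconUser
DB_PASS=s3cretPass
DB_NAME=beacon
CLUSTER_HOST=cluster0.abcde.mongodb.net   # assumption: your cluster's host differs
echo "mongodb+srv://${DB_USER}:${DB_PASS}@${CLUSTER_HOST}/${DB_NAME}?retryWrites=true&w=majority"
```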
- In AWS' IAM, create a new user that can run Lambda functions: https://www.serverless.com/framework/docs/providers/aws/guide/credentials and https://www.serverless.com/framework/docs/providers/aws/guide/credentials#creating-aws-access-keys
- Give those credentials to serverless: serverless config credentials --provider aws --key thekey --secret thesecret (the command may not be exactly that, so check the docs)
- Clone the repo if you haven't done it already, and checkout the "aws" branch
- Copy the .env.sample file to .env and edit it. The first line (DB) comes from MongoDB's connection string. The second line (JWT_SECRET) can be any string.
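At this stage the file only needs those two lines; a sketch with placeholder values (not real credentials):

```shell
# .env sketch — both values are placeholders
DB="mongodb+srv://beaconUser:s3cretPass@cluster0.abcde.mongodb.net/beacon?retryWrites=true&w=majority"
JWT_SECRET="any-long-random-string"
```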
- In the terminal, run "serverless deploy" inside the repo directory.
- In AWS console, EC2, keypairs, if you don't have a public key added yet, add one now.
- In AWS console, provision an EC2 instance (free tier is OK, e.g. a micro instance + Ubuntu 20.04). All defaults are OK. Use your key pair from the previous step so you can SSH in. Make a note of the instance ID.
- SSH into the EC2 instance once it's running (user "ubuntu"). Install nvm (node version manager). https://github.com/nvm-sh/nvm
- In the SSH session: nvm install node && npm i && npm run (note: npm i needs to be run inside the repo directory, cloned in the next step)
- In the SSH session, clone the repo; this time stick to the "master" branch. Copy the .env file from your machine to the EC2 instance (e.g. with scp).
- In the SSH session, install npm if needed (apt-get install npm) so you can then install this: https://www.npmjs.com/package/pm2
- In the SSH session, pm2 start index.js
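The EC2-side steps above can be sketched as a script you copy to the instance and run there (the nvm version pinned below is an assumption — check nvm's README for the current install command):

```shell
# Write a setup script for the EC2 instance; review it before running it there.
cat > setup-ec2.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Install nvm (version is an assumption; see https://github.com/nvm-sh/nvm) and Node
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"
nvm install node
# Clone the backend (master branch) and install its dependencies
git clone https://github.com/CCExtractor/beacon-backend.git
cd beacon-backend
npm i
# pm2 keeps the server alive after the SSH session ends
npm i -g pm2
pm2 start index.js
EOF
chmod +x setup-ec2.sh
echo "wrote setup-ec2.sh"
```

Remember to copy your .env into the repo directory on the instance before `pm2 start`.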
- In redis.com create a subscription (free tier), then a database. All defaults are OK. Make a note of the redis DB password (this is NOT your redis.com user password) and connect URL.
- In the SSH session, edit the .env file and add the redis stuff. Ultimately the file (in the EC2 instance) looks like this:
```
REDIS_AUTH = errRXXXXXXXXXXXXpeGa42
REDIS_URL = redis-17606.c16.us-east-1-2.ec2.cloud.redislabs.com
REDIS_PORT = 17606
```
(the port can be something else)
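As a sanity check, those three values combine into a standard redis:// URL (placeholder credentials from the example above, written KEY=value so the shell can read them):

```shell
# Assemble a redis:// connection URL from the three .env values (placeholders)
REDIS_AUTH=errRXXXXXXXXXXXXpeGa42
REDIS_URL=redis-17606.c16.us-east-1-2.ec2.cloud.redislabs.com
REDIS_PORT=17606
echo "redis://:${REDIS_AUTH}@${REDIS_URL}:${REDIS_PORT}"
```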
- Stop (don't terminate) the EC2 instance.
- On your local machine, also edit the .env file to add the same Redis values, plus this line:
```
INSTANCE = i-077f0f20d97df3333
```
Of course, use your actual EC2 instance ID.
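Putting it together, the local .env ends up with six values; a sketch with placeholder values (not real credentials):

```shell
# Complete local .env sketch — every value is a placeholder
DB="mongodb+srv://beaconUser:s3cretPass@cluster0.abcde.mongodb.net/beacon?retryWrites=true&w=majority"
JWT_SECRET="any-long-random-string"
REDIS_AUTH="errRXXXXXXXXXXXXpeGa42"
REDIS_URL="redis-17606.c16.us-east-1-2.ec2.cloud.redislabs.com"
REDIS_PORT="17606"
INSTANCE="i-077f0f20d97df3333"
```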
- Again, on your machine, run "serverless deploy"
Questions
- Where is that JWT_SECRET used?
- How do we connect this to our own domain?
- How do we connect the frontend with this backend?
TO-DO
- We haven't hardened the connection to the DB; it still accepts connections from any IP
- Looks like we deployed Apollo in dev mode, not prod, so there's a studio anyone can access
- Service start must be automatic in the EC2 instance. Create a systemd unit file for this
- Automate EC2 instance creation and everything else that can be automated (maybe the redis part too?)
- Probably broken: we stopped the EC2 instance, but since we don't have a startup script, nothing will happen when it's started again.
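As a starting point for the systemd TO-DO above, a unit file sketch — the paths and the Node binary location are assumptions (nvm installs Node under ~/.nvm; run `which node` on the instance to find the real path):

```ini
# /etc/systemd/system/beacon.service — sketch, adjust paths to your instance
[Unit]
Description=Beacon backend
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/beacon-backend
EnvironmentFile=/home/ubuntu/beacon-backend/.env
# Assumed path — replace with the output of `which node` on the instance
ExecStart=/home/ubuntu/.nvm/versions/node/v16.9.0/bin/node index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now beacon` would start it on boot, which also fixes the "probably broken" item above.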
Troubleshooting (places to look for problems)
- AWS console → Lambda → function name (should be apollo-lambda-dev-graphql) → Monitor → Logs → View logs in CloudWatch
- This shows log streams grouped by time; open one to view the actual logged messages and errors.
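Lambda log groups follow AWS's /aws/lambda/<function-name> convention, so with the AWS CLI v2 installed and configured you can also tail the logs from a terminal instead of clicking through the console:

```shell
# Derive the CloudWatch log group name from the Lambda function name
FN_NAME=apollo-lambda-dev-graphql
LOG_GROUP="/aws/lambda/${FN_NAME}"
echo "$LOG_GROUP"
# With AWS CLI v2 and credentials configured, tail it live:
#   aws logs tail "$LOG_GROUP" --follow
```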