More testing!

Thanks to the fine folks at LinuxAcademy for having a Black Friday sale, I bought a year’s subscription and have started a path towards the Certified Systems Engineer tests. I can probably take the systems administrator test cold and pass, but prepping for the certification will help me fill in the holes on the tasks and skills I rarely use, and validate the stuff I have to do all the time. I feel like I am giving all my spare cash to testing and prep companies, but I think it will be worth it, if for no other reason than to continue to be a lifelong learner rather than just binge-watching video content all day.

My goal is to take either the AWS DevOps Engineer Professional exam or the LPIC-101/102 exams by the end of December. I am also working on the following courses:

  • AWS Specialty Networking (LinuxAcademy)
  • LPIC-101/102 & 201/202 (LinuxAcademy)
  • Node.js (Udemy)
  • Python (Udemy)

I’m not yet certain that getting all seven AWS certifications is necessary. The Big Data cert covers an area where I haven’t spent any time, so it may be the most challenging one to get practical experience for. I’m not sure where it might fit into my education at my current level.

I’m still trying to decide if it will be worthwhile to invest in a master’s degree. I didn’t finish my master’s in instructional technology, and it doesn’t make sense to go back to finish it as I will not be doing any classroom teaching. There is an online program at Georgia Tech that looks interesting. Let’s see if the certifications get me anywhere before I start sinking serious money into an advanced degree. Onward and upwards!

Creating my own backup tool with AWS

As of today, I am no longer a CrashPlan customer, as they have phased out their home/consumer service to focus on larger customers. I looked at other solutions, but $5 per month per machine seems to be the going rate, so I figure I’ll try to create my own solution and save some money. I need an automated backup solution for my home computer and my work laptop.

Obviously, this is a problem already solved by many people (Duplicati, for one), so this is just a learning experiment for me.

My first attempt is to use command line tools to automate a copy from the local file system to an S3 bucket.

  1. Create the bucket.
    1. Pretty straightforward, except that this is my first time using encryption at rest. I chose the default AES-256 encryption, which should keep anyone but me from accessing the data. I also created a lifecycle policy to move files to cheaper storage after 30 days, then to Glacier after 60 days, then retain them for 10 years. I figure if I haven’t accessed a file in 10 years, I really don’t need it anymore :-). Initially, I am putting all client backups into one bucket. I think the next iteration will be to create the bucket programmatically and pass the bucket name in when creating the user and policy. (There’s a rough CLI sketch of this setup after the list.)
  2. Create the user.
    1. I have created a programmatic-access IAM user with S3 access for each device that needs to be backed up. Once the concept is proved, I’ll tighten it down so each user can access only their specific bucket. (A sketch of that policy follows the list too.)
  3. Add the user profile to awscli.
    1. My first time configuring multiple users with the --profile option; very easy to do.
  4. Test the command.
    1. My first attempt was to use the aws s3 sync command to begin the copy. This command is being run from a Windows 10 machine with Bash on Ubuntu (WSL) installed and the AWS CLI configured.

         aws s3 sync --profile myClient /mnt/c s3://mybucket.backup/

        This worked fairly well until it got down into some of the Windows internals, then stopped. I don’t think the OS liked that. So, I chose to sync my home folder and backup drive instead. Having the two processes running at the same time seemed to speed things up a bit, my clumsy attempt at manual parallelization 🙂
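
For anyone curious, here is roughly what that bucket setup (step 1) looks like from the command line. I actually clicked through the console for most of it, so treat this as a sketch; the bucket and profile names match the sync example above, the lifecycle numbers are the ones described in the list, and I’m assuming Standard-IA as the “cheaper storage” tier.

      # Create the bucket (us-east-1; other regions also need --create-bucket-configuration)
      aws s3api create-bucket --bucket mybucket.backup --profile myClient

      # Turn on default AES-256 encryption at rest
      aws s3api put-bucket-encryption --bucket mybucket.backup --profile myClient \
        --server-side-encryption-configuration \
        '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

      # Lifecycle: cheaper storage at 30 days, Glacier at 60, expire after ~10 years
      aws s3api put-bucket-lifecycle-configuration --bucket mybucket.backup --profile myClient \
        --lifecycle-configuration '{
          "Rules": [{
            "ID": "rotate-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [
              {"Days": 30, "StorageClass": "STANDARD_IA"},
              {"Days": 60, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 3650}
          }]
        }'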
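
For the per-device user (step 2), the tightened-down version I have in mind would look something like this. I haven’t actually applied this policy yet, so the user name and policy name are placeholders:

      # Create the per-device user and an access key for the CLI
      aws iam create-user --user-name backup-homepc
      aws iam create-access-key --user-name backup-homepc

      # Inline policy that only allows access to this one bucket
      aws iam put-user-policy --user-name backup-homepc --policy-name backup-homepc-s3 \
        --policy-document '{
          "Version": "2012-10-17",
          "Statement": [
            {"Effect": "Allow",
             "Action": ["s3:ListBucket"],
             "Resource": "arn:aws:s3:::mybucket.backup"},
            {"Effect": "Allow",
             "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
             "Resource": "arn:aws:s3:::mybucket.backup/*"}
          ]
        }'

      # Then register the new access key as a named profile (that is all step 3 really is)
      aws configure --profile myClient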

So far, so good; it’s taking a while to push about a terabyte of data over my home internet connection. The next step will be to automate the process, which requires learning how to create recurring tasks on Windows and Mac. I wish all OSes would just use crontab, sigh…
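
For the scheduling piece, my rough plan (untested so far) is a crontab entry on the Mac and a scheduled task on Windows that shells out to WSL. Paths, times, and the task name below are placeholders:

      # Mac: crontab -e, then sync the home folder nightly at 2am
      0 2 * * * aws s3 sync --profile myClient ~/ s3://mybucket.backup/ >> ~/backup.log 2>&1

      # Windows Task Scheduler equivalent, run from an elevated prompt
      # (quoting may need some tweaking)
      schtasks /Create /TN "S3Backup" /SC DAILY /ST 02:00 ^
        /TR "bash -c 'aws s3 sync --profile myClient /mnt/c/Users/me s3://mybucket.backup/'"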

I’ll do some analysis of the costs over the next 90 days and see if I can actually save money. I think the data transfer and regular S3 storage costs might be more than $5 a month at first, but once the lifecycle policy kicks in, and as long as I’m doing differential backups, the data charges should get much cheaper too. Let’s see how long it takes this service to pay for itself…
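
As a sanity check on that guess, here is the back-of-the-envelope math using the per-GB prices as I understand them today. These are assumptions and they do change, so check the current pricing page:

      1 TB ≈ 1,024 GB
      S3 Standard      ~$0.023/GB-month   ->  ~$23.55/month
      S3 Standard-IA   ~$0.0125/GB-month  ->  ~$12.80/month
      Glacier          ~$0.004/GB-month   ->  ~$4.10/month
      (Transfer in to S3 is free; requests and any restores cost extra.)

So the bulk of the data really needs to age into Glacier before this beats the $5 per month I was paying, which is exactly what the lifecycle policy is for.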

The step after that will be to develop an app client for this process, which is a bit more than I know how to do at this point. I’ll dig through GitHub and see if anyone has built this sort of thing already.


NOTE: Found this after I started experimenting: an official step-by-step guide.

A moment of pause

My resume isn’t getting much interest, and obtaining the AWS Solutions Architect Professional certification has not been the magic fairy dust I hoped it would be. Still lots of work to do. My resume needs to be rewritten to focus on the technical implementations that attract solutions architect and DevOps engineer opportunities, and less on titles, positions, and management experience. I’m working on the DevOps Professional certification next and expect to take it by the end of the year. We’ll see if having 5/5 is enticing to tech companies. Clearly, my network needs work, and I need to get out and meet people. I’ve got a meetup this Tuesday; standing around was hard for me last time, so I’ll see if I do a better job of networking this time.

After I got my last certification, I took a bit of a mental break from blogging, but it’s time to get back at it. Reflection is very useful, if for no other reason than to get your thoughts organized. Time to drive forward!