How I Have Learned Not to Be Afraid and Love SHELL
SHELL, GIT, SSH and Much More…
When computers were big and users were skillful, that was enough for everyone. Then computers got smaller and moved from machine rooms onto users' desks! There were more and more users, the tasks grew more and more varied, many began to find the command line lacking, and the graphical user interface (GUI) was invented. Over time many turned away from our hero and began to fear and despise it... But, guys, we are developers and we should know it. So, our topic is:
How I have learned not to be afraid and love SHELL
Today we will talk a little about how we use our computers and what tools we have for this purpose. In general, a little bit about everything. I'll warn you right away that you already know a lot of this, but as practice shows, not everyone and not everything!
Generally, we will begin from the moment when you come to work, receive a machine and have to start working. But first the machine needs to be set up.
What does setup begin with?
We will assume that the OS is already installed, clean and untouched... Of course, we need to start with SHELL! You open your beloved Konsole, Gnome-Terminal, Terminal (underline as necessary) and start issuing commands like sudo apt-get update and so on...
For a start, we will get acquainted with the main SHELL implementations (ZSH, BASH), their configuration files (.profile, .zshrc, .bashrc...) and the seasoning that makes them tastier.
Of course, everybody has used a console. It starts SHELL, the great and terrible! But you shouldn't be afraid of it: SHELL is our friend, and with its help you can, if I may say so, work miracles!
In the majority of systems, BASH (an implementation of SHELL) is installed by default. It is a very capable thing, with a bunch of settings and parameters, deservedly used around the world. But for myself I discovered something different and more advanced: ZSH. It is positioned as a further development of BASH, with richer settings and improved autocomplete.
A simple example is autocomplete. Pay attention to the picture below! ZSH is on the left side and BASH is on the right. As you can see, ZSH cycles through the values with TAB (and later you can use the arrows), while BASH simply shows the whole list and that's all. It's inconvenient!
Or a task that is typical for me: to get to the server directory from the webapp and launch it. ZSH guesses what I need, but BASH needs to be set up for this purpose.
Of course, BASH can be set up perfectly well, but it's pretty complex! For ZSH there is a remarkable thing called Oh-my-zsh: a set of scripts and settings consisting of a ton of modules with autocomplete for every occasion, convenient helpers and so on. For example, autocomplete for the node and npm commands.
All right, we’ve dealt with it. Let’s go further!
What is alias?
Well, the word alias speaks for itself: a pseudonym. I just want to remind you that an alias for the ls -l command is one thing, while an alias that, say, deploys the application to the demo server is quite another.
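A minimal sketch of both kinds (the deploy alias is purely hypothetical; the host and path are illustrative):
alias ll='ls -l'                                                               # a simple shortcut
alias deploy-demo='ssh user@demo-server "cd /home/myapp/webapp && git pull"'   # a whole deployment in one word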
Ah yes, all these settings should live somewhere.
Meet .bashrc, .zshrc and .profile!
The logic here goes like this: .zshrc and .bashrc are the configuration files of the corresponding SHELL implementations. But if you want some commands or settings to be available in both shells, I recommend moving them to .profile and then simply sourcing it from both configurations. By the way, when you modify the *rc file, you don't have to restart SHELL! It is enough to write source ~/.zshrc and that's all!
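A minimal sketch of that layout (the shared settings live in ~/.profile; the line below goes into both rc files):
# in ~/.zshrc and ~/.bashrc
[ -f ~/.profile ] && source ~/.profile
# after editing a config, reload it without restarting the shell:
source ~/.zshrc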
A little more about the setup
Let's say some people love to try out various software. Instead of installing it into different system folders, we can easily create something like ~/bin in HOMEDIR and put there everything that is unworthy of being installed into the system, but useful enough to keep on the hard drive. Various self-made scripts live here too (chmod +x script_name is all we need).
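A minimal sketch (the script name is illustrative):
mkdir -p ~/bin                 # a personal folder for binaries and scripts
cp my_script.sh ~/bin/
chmod +x ~/bin/my_script.sh    # make it executable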
I think everybody has noticed such an amusing variable as $PATH. If you want your scripts to be launched directly from the console without specifying the full path, it is enough to put PATH=/path/to/script:$PATH in the configuration and enjoy it. The main thing is to always keep $PATH in that string, otherwise you will overwrite everything and the console won't be able to find the usual programs (this variable is responsible for the search path of executable files). One more thing: since I put my path at the beginning, it will be searched first for the desired command (a program or a script), and only then will SHELL go on to the remaining folders (priority goes to whatever is found first). You can extend $PATH as many times as you like; the main thing is to always append $PATH at the end of each definition so that you don't overwrite the previous value.
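A minimal sketch, assuming the scripts live in the ~/bin folder from the previous step:
export PATH="$HOME/bin:$PATH"   # prepend our folder, keep the old value at the end
which my_script.sh              # should now resolve without the full path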
Also, there are stream redirection (stdin, stdout, stderr), pipes, scripts, etc.: free -m, top, htop, wall, ps aux | grep "ababagalamaga", ln -s, sed, less, tail -f, man etc...
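A couple of illustrative one-liners for the redirection and pipe part (the file and process names are made up):
./my_script.sh > output.log 2> errors.log   # stdout and stderr into separate files
ps aux | grep -i "nginx" | less             # chain commands with pipes and page through the result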
But we move further.
Everybody’s favorite GIT!
Phew, at last we have dealt with SHELL, the system works, it doesn't fall apart and in general we are pleased with ourselves. What is next? Next comes the source code: editors, IDEs and so on, but we will leave them out of scope. For example, I use Atom + VSCode. Source code is good when it is looked after.
Most likely, you already use GIT. If not, then I will point out a couple of GIT commands for you. Here is what you need to know about it:
- You should use it from the console. For example, git status will help you see the code/files that were not added, so it will be harder to miss something.
- Not all IDEs, editors with extensions and other tools support all GIT commands.
- The console helps you double-check that everything is right. This way you may notice that the changes were made in master, but not in feature/my-breaking-changes (for example).
- All in all, it is cool! You sit like a hacker from a 90s movie and code something. Well, who didn't want to be a hacker in childhood?
I would like to accentuate the following commands:
- git stash
- git diff (--cached)
- git log
- git show
- git checkout
- patch
- git blame
So, one by one.
Git stash
Everyone has been in a situation where there is a task, you are making changes in your branch, and then the project manager/tester/designer comes and you realize you need to make certain changes in another branch. You don't want to make a commit yet, but you don't want to lose the changes either! What to do, where to run?
Actually, git stash! Briefly and rather roughly, stash moves all your changes onto a stack, clearing the current working copy. As a result, you have a clean branch that you can switch away from to anywhere, do some work there, switch back and run git stash apply. After that the top element of the stack is applied and you continue working from where you stopped.
A couple of clarifications: the stashed changes can be applied in any branch, no matter where they were made. This gives room to maneuver, since everybody occasionally forgets to switch branches before making changes. By the way, in this case I personally do git add . instead of git commit -a, and then commit after git status and git diff --cached. That way, when you are about to save the changes, you notice that it's the wrong branch, do git stash, switch, do git stash apply and enjoy it!
And second: git stash apply doesn't delete the changes from the stack automatically. That is, after you apply them for the first time, you can apply them again in another branch and so on (the stack can easily be emptied with git stash clear). If you don't need the changes kept in the stack, use pop instead of apply.
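A typical sequence, roughly (the branch name is illustrative):
git stash                     # park the current changes on the stack
git checkout feature/hotfix   # switch to the branch where they really belong
git stash apply               # bring the changes back (or git stash pop to also drop them from the stack)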
Git diff
The next one is git diff. In general, this functionality is built into all IDEs, plugins and so on, but we work in the console. It can be unclear at first, but only until you get used to it.
Git diff (like the other GIT commands) has a ton of options, but I most often use --cached. That is, git diff --cached and voila: you can see the changes staged for the commit (that is, after git add, but before git commit). So what else? With diff it is possible to compare any branches, even file by file!
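A few illustrative variations (the branch and file names are made up):
git diff --cached                                # what is staged but not yet committed
git diff master feature/my-branch                # compare two branches
git diff master feature/my-branch -- src/app.js  # or just one file between them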
Git log
Everybody certainly knows the git log command. I just want to add a couple of words. First, it gives you a convenient history of commits here and now. Secondly, please write clear commit messages so that nobody has to guess what comments like fix and up mean.
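One handy variant, as a sketch:
git log --oneline --graph --decorate   # a compact history with branches and tags drawn in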
Git show
Git show is also good! Sometimes it is extremely convenient to see exactly what changes went into this or that commit!
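For example (the hash is illustrative):
git show 56a93ae842   # the commit message plus its full diff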
Git checkout
Git checkout. Hmm, I don't remember why it's on the list, but if it is, it's probably worth it!
Generally, I most often use the git checkout -- . command, which discards all current changes, and git checkout -b branch-name, which creates a branch from the current one and switches to it. I think everybody knows about git checkout branch-name (without -b): it switches to a branch... But not everybody knows that it is possible to switch to a certain tag or commit, again with the help of git checkout! You just write git checkout 56a93ae842 and it's done: you are already on the needed commit.
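All three in one place, as a sketch (the branch name and tag are illustrative):
git checkout -- .              # throw away local changes in the working tree
git checkout -b feature/new    # create a branch from the current one and switch to it
git checkout v1.2.0            # jump to a tag (or a commit hash) in detached HEAD state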
Patches
There is one more remarkable thing! I will describe it simply, and you decide whether you need it and where it can be used: git diff creates a patch (git diff > file.patch), and git apply applies a patch (git apply file.patch).
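For instance, to carry uncommitted changes from one machine or branch to another (the file name is illustrative):
git diff > my-changes.patch    # save the current changes as a patch
git apply my-changes.patch     # replay them somewhere else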
Or a situation when at the last minute it turns out that you forgot to add some changes to your commit. Of course, there is no need to get upset right away. If the changes are still local and have not been pushed to the server, just add what's needed with the git add <changed_file> command, and after that you can fold it into the existing commit with git commit --amend. This way we can add the forgotten file to the commit, change the message and so on.
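A quick sketch (the file name is illustrative; do this only before pushing):
git add forgotten_file.js
git commit --amend             # rewrites the last commit and lets you edit the message too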
Git blame
At last, perhaps one of the most remarkable GIT commands: git blame. I am sure not everyone knows it. Sometimes it is convenient to see who added some code, when and why. By the way, the same feature exists on Github, Bitbucket, Gitlab... well, everywhere!
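For example (the path is illustrative):
git blame src/app.js             # who last touched each line, in which commit and when
git blame -L 10,20 src/app.js    # the same, but only for lines 10 through 20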
It is worth mentioning that GIT has quite a decent built-in reference, git <command> --help, and there is a lot of literature on the net.
A little bit about SSH...
Here I must admit that I hesitated a little about which section to put this in, but decided to leave it as it is.
So SSH literally means Secure Shell, which hints that there was also an insecure shell: Telnet. Generally, SSH allows you to do everything you do on your own machine, only somewhere else (and by "somewhere else" I mean anything from a virtual machine started on a random computer to the mainframe that runs some oil company or, at worst, the web server on Dropbox or elsewhere).
Well, since there are not that many everyday tasks for SSH, there is not much to tell here. But some things you definitely should know.
For example, SSH is rather convenient in that you don't need to enter the boring password every time you connect to a remote server! It is enough to generate a pair of keys (we put the public one on the server and we cherish the private one), and next time we get onto the server without caring about memorizing the password at all. By the way, the key (its contents) should go into the HOMEDIR of the user you log in as on the server, in the .ssh/authorized_keys file. It is enough to copy the contents and paste them as a new line in this file. There can be many such entries.
It is possible to keep a set of SSH keys (personal, work, etc.) and use them all at the same time. SSH supports the -i /path/to/private_key parameter, which will help you with that (attach the login command to an alias and you won't have to type -i every time).
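A minimal sketch of the whole setup (the email, key names and host are illustrative):
ssh-keygen -t ed25519 -C "me@example.com"              # generate a key pair
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server       # append the public key to authorized_keys on the server
alias work='ssh -i ~/.ssh/id_work user@work-server'    # a hypothetical alias that bakes in the right key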
If after all manipulations it still requests the password, it means that something has been done wrong.
By the way, SSH also has a verbose mode enabled with the -v flag (or -vv, -vvv; the more v's, the more talkative SSH gets).
What about SCP?
SCP is a very similar command, only it works not like a shell but like cp: we write scp user@address:/path/to/file-or-dir /path/where/to/store (and if we are copying a directory instead of a file, we add -r) and we receive the files on our machine. It also works in the opposite direction. And like SSH, it understands keys (if there is a key and the server accepts it, fine; no key and it will ask for the password).
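Two illustrative directions (all paths and hosts are made up):
scp user@server:/var/log/app.log ./              # pull a file from the server to the current folder
scp -r ./build user@server:/home/myapp/webapp    # push a whole directory to the server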
We are close to the interesting part (I was surprised that not everyone uses it!).
Bitbucket…
Of course, everybody knows about the opportunity to pull/push changes over HTTPS, but it is inconvenient! It is tedious to drive everything through a protocol that is not really meant for this, and moreover it constantly asks for the password! It is far simpler to generate a pair of keys, upload the public one to Bitbucket, and switch the repository to SSH instead of HTTPS. That's all! Now you don't need to enter the login/password every time. Nevertheless, please be careful and don't push changes that can be painful for everyone.
It is worth mentioning here that key-based SSH access to Bitbucket and Github has a nuance: when working over SSH, the server recognizes us by the key. On the one hand there is no trouble, on the other... If you have a personal and a work repository (and, accordingly, two accounts), one key is not enough: Bitbucket simply won't allow adding the same key to two different accounts! In this case, certain manipulations are necessary during the initial setup. For example, we edit the SSH configuration file ~/.ssh/config, and then, instead of the real HostName, we use its alias from the SSH configuration. In the same way, by the way, we can avoid explicitly specifying which key to use.
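A minimal sketch of such a ~/.ssh/config (the host aliases and key names are illustrative):
Host bitbucket-work
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/id_work
Host bitbucket-personal
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/id_personal
Then the repository is cloned via the alias instead of the real host, e.g. git clone git@bitbucket-work:team/repo.git, and SSH picks the right key by itself.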
Welcome, or no trespassing (server && root access)!
Actually, here it is, the crown of all our waiting, expectations and trends: the server (demo, dev, production, it doesn't matter!) where our beloved project runs. We work with it, our hands vigorously type commands and... oh no, something has happened!
For a start, an old administrators' joke: "editing the firewall on a remote server usually ends with a road trip to it".
What can I tell you about the server? Perhaps, first of all, it is worth understanding that it isn't recommended to log in as the superuser (that is, root)! Why, you ask? Well, at least because it is easy to make a mistake and accidentally write rm -rf / instead of rm -rf ./. Many of us, I think, know the result (I have run into it more than once). Run the same as an unprivileged user and only you will suffer. For the same reason sudo, which allows running the subsequent command as the superuser, should be used as seldom as possible.
Example:
cats@alkor ~ $ whoami && sudo whoami
cats
root
Also, you shouldn't forget that carelessness seldom goes unpunished. For example, think about how many npm packages you actually inspect before installing and using them in a project. And if "defective" code is installed or launched with root privileges, it is that much easier for it to do a lot of harm. In general, you shouldn't make life easier for potential wreckers.
Changing access rights on files/folders belongs here too (and not only on the server; by the way, everything said above applies to the local machine as well, though on the server it shows much more vividly). Do it as seldom as possible, only when there is no other way out. For the same frontend/backend/homepage, in most cases an ordinary user is enough! Just point apache/nginx not at /var/www but at, for example, /home/myapp/webapp and it's done.
It's the same with database access: if access is needed, you create a user with maximum rights only for your DB, not for the whole DBMS, and enjoy your life. It helps you avoid many surprises in the future.
Finally, there are two major points:
- If an error occurs in the application, always read the log, which is a file with records of events in chronological order! The entries may seem "not informative", but this skill is a must-have. So many times I neglected the logs, and it later turned out that the file contained a description of the exact problem, so it could have been solved quickly.
- The server is not a toy. An unknown number of daemons/processes/tasks of other developers can be running on it. Not all of these processes come back up automatically or handle unexpected shutdowns gracefully. Some settings may simply not be saved. We will leave all this on the conscience of the administrator/developer, but the fact remains. That is, if a service suddenly stops, a daemon freezes, or some other embarrassment happens to your application, you shouldn't panic: there are always logs, and if they tell you that your application/DB server "feels" bad, there is a whole set of methods to make it "feel" good again. And a reboot is one of the last options (besides, there is no guarantee it will help!).
For example, imagine a big piece of iron: a "many-armed" server with a bunch of network cards, disks and a lot of other things stuck into it. It is big and expensive, and lots of important, resource-intensive tasks run on it. And then the unforeseen happens: some service fails! A minute of downtime of such equipment is often priced at a many-digit figure. And because of the peculiarities of its "construction" and of the services running on it, such a server can take quite a long time to boot.
In this situation it is much better to look at the monitoring, read the log from before/during/after the failure and restart the service! The server keeps running and you get off with a slight fright.
For example, on systems with Systemd it will look something like sudo systemctl restart SomeService.
Sometimes it doesn’t help and the service doesn’t respond for various reasons… then you should be extremely careful!
ps aux | grep -i "servicename" will show information about the service (the PID, i.e. the process ID, plus the path, command, memory usage and so on). After that we can run kill PID or kill -9 PID (the more rigid option), which kills the selected process by its PID, and then quietly start the killed service again.
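A sketch of the whole sequence (the service name and PID are illustrative):
ps aux | grep -i "myservice"      # find the process and note its PID
kill 12345                        # ask it politely to terminate (SIGTERM)
kill -9 12345                     # the rigid option: force-kill (SIGKILL), last resort
sudo systemctl start myservice    # bring the service back up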
One caution: if you don't know or are not sure what you are doing, better ask more experienced colleagues! Or the administrator.
Always remember: someone else can always live on the server! For example, your colleagues!
For example, who will show who is connected at the moment (user, IP...). The wall command will help you communicate with other users.
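A small illustration of both (the message is made up):
who                                                      # list logged-in users, their terminals and IPs
echo "Restarting the demo service in 5 minutes" | wall   # broadcast a warning to everyone logged in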
I give you an opportunity to think (or to search?) how exactly it is possible to use the obtained information.
Actually, following these simple guidelines will help you avoid shouts from the other end of the office: "The server is down again! Who rebooted the server?"
I hope my article was not too boring and was even useful in a way!
“Instead of an epilog”
So I have gathered a "motley crew" of utilities, common mistakes and just facts that can be useful to novices and not-quite-Jedis of the keyboard. Given the time restrictions, I didn't tell you everything I wanted and should have: input/output redirection is barely mentioned, nothing is said about scripting, and so on and so forth.
However, that’s another story and someday I will definitely explain it to you!