Linux for PHP Developers

Jon Peck


0. Introduction

0.1 Welcome to Linux for PHP developers

Have you ever wondered what happens when you type a URL into the address bar of a browser and hit Enter? A single view of a website involves the coordination of dozens of technologies, such as web servers, databases, scripting languages, and more. In a world of turnkey solutions, it's important to understand how these fundamental systems work together. I'm Jon Peck, and I've been architecting large-scale web applications for more than a decade. In this course, we're going to install and configure a complete Linux-based web development server for PHP. We'll start by exploring how the fundamental components of the web work together. Then, we'll install and manage a complete Linux web server optimized for development. Finally, we'll learn how to troubleshoot each component effectively. Throughout this course, we'll explore and use common Linux commands, server components, and software useful for debugging and development. In the end, you'll have a virtual development server running like any other program in your existing operating system. So, no reformatting, additional hardware, or dual-booting is needed. I'm passionate about building great systems the right way, so let's get started.

 

0.2 What you should know

Linux for PHP Developers is designed with the assumption that you don't have any experience with Linux or server administration. Every command and technique will be described and demonstrated in context so nobody will be left behind. Do you already have some Linux experience? No worries. You can always learn more about configuration, best practices, and how everything works together. This is a systems administration course for PHP developers, but this is not a PHP development course. You should already have a working knowledge of the PHP language, and have written a few scripts. Without this background, you might not have enough context to follow along with what I'm doing, which will make it harder to enjoy and learn. For some background, or a refresher, I recommend Learning PHP with David Powers, here in our library. Local web development is a very common need, and there's always more than one solution to a problem. For example, XAMPP from apachefriends.org has versions for Windows, Mac, and Linux. There's also WampServer from wampserver.com, which is for Windows only, and MAMP from mamp.info for Macs only. Each of these options has its advantages and disadvantages, and I suggest you explore what's available. If you'd like to learn more about local web stacks, check out Installing Apache, MySQL, and PHP with David Gassner here in our library. Some of the demonstrations include using Git, a popular source code management and revision control system. No prior experience is necessary, and the use of Git is not required for the development server. However, it's extremely useful, and I strongly recommend some kind of source code management. To learn more about working with Git, watch Git Essential Training with Kevin Skoglund, here in our library. This course will be using PHP 7, which at the time of this writing is the current major version. PHP 7 is great for new and current development projects and should have no compatibility problems.
With that said, some older and legacy applications may need to be updated. In those cases, and especially for custom code, I also recommend checking out the PHP documentation on how to migrate from PHP 5 to 7, on php.net. If you can't, or won't, use PHP 7, other versions of PHP can be installed, but that will not be covered or supported in this course. Finally, a note about compatibility. Historically, the software used in this course is stable and forwards-compatible, meaning that the instructions should work with future versions. With that said, software does update and evolve, and newer versions of some of the software may have a slightly different look and feel. If that happens, don't panic. While a label, icon, or description may change, the functionality and intent remain the same, and the instructions and guidance in this course will still apply. For convenience, links to download the exact versions of the software used will be available on the course homepage.
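Since the course assumes PHP 7 or later, a quick sanity check of the installed version can save trouble later. Here's a minimal sketch of such a check; the version string is hard-coded for illustration, but on a real server it would come from running `php -r 'echo PHP_VERSION;'`.

```shell
# Hedged sketch: verify the PHP major version is at least 7.
# "7.2.4" is an illustrative value standing in for the real output of:
#   php -r 'echo PHP_VERSION;'
version="7.2.4"
major=${version%%.*}   # strip everything after the first dot to get "7"
if [ "$major" -ge 7 ]; then
  echo "PHP $version is supported by this course"
else
  echo "PHP $version is too old; see the PHP 5 to 7 migration guide on php.net"
fi
```

The `${version%%.*}` parameter expansion keeps only the major version number, which is enough to decide whether the migration guide applies.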

NOTES
Learning PHP with David Powers
Git Essential Training with Kevin Skoglund
php.net
https://en.wikipedia.org/wiki/PHP

 

0.3 Software prerequisites

Before we begin, you'll need to download and install a couple of free programs. Don't worry about configuration for now. Just having them installed is sufficient. Secure Shell, or SSH, is a secure method for remotely logging into the command line and executing commands. We'll be configuring the server over SSH, so you'll need an SSH client. If you're using a Mac, everything you need for SSH is already installed. The application that you'll want to use is named Terminal.app, which is in a sub-folder under Applications called Utilities. From the Finder, click Go. Go down to Utilities, and double click on Terminal. If you're using Windows, you can use the free program PuTTY to connect via SSH to remote servers. PuTTY is available from the official PuTTY website. Go to Download: Stable. Scroll down, and use the 32 or 64-bit version. Read the FAQ entry if you need more details. You'll need some sort of IDE or text editor to edit files. My demonstrations are going to be using Atom from atom.io. Atom is a cross-platform, free and open-source text and source code editor. You don't need Atom to use or administer the server. There are many other IDEs, both free and commercial, that will work just as well. I'm going to demonstrate how to connect to a database using a visual MySQL client. MySQL Workbench Community Edition, available from dev.mysql.com, is a free, open-source, and cross-platform MySQL client. It's useful for managing databases, including designing the structure, importing and exporting the content, and so forth. You don't need MySQL Workbench to use the database server. Other MySQL clients will also work. However, I will be demonstrating its use. You may be curious about PHP applications such as phpMyAdmin, or Adminer, that can be installed on the development server. While prior versions of this course included phpMyAdmin, I'm concerned about providing a popular target for vulnerability scanners if used on a public-facing web server.
They're not bad applications, but the potential for abuse is high. At the end of the day, you can and should choose the best tool that meets your needs. Please be sure to make an informed decision.

NOTES
https://www.chiark.greenend.org.uk/~sgtatham/putty/
MySQL Workbench Community Edition
Adminer, adminer.org

 

0.4 Exercise files for this course

If you have access to the exercise files for this course, you can download them to your desktop. The exercise files for this course are contained in three folders: Configuration, which contains service and program configuration files; MySQL, which contains some of the queries that will be used during the course; and Sandbox, which contains executables used for demonstrations. This will help you identify areas of optimization and research. In addition to the exercise files, a free quick reference to the Linux and server commands covered in this course is provided as a PDF. Feel free to print out and distribute this quick reference. If you're viewing this course on a mobile device or a set-top box, or your membership doesn't provide access to the exercise files, don't worry. The exercise files are a convenience, not a requirement, so please continue to follow along as we progress through the course. Now, let's get started with Linux for PHP Developers.

 

1. Getting Started

1.1 Networking fundamentals

The internet is a fantastic, complex, and continuously evolving modern marvel. Its power comes from the focus on the message, not on the medium, meaning you can produce and consume content without having to understand how it all works. With that said, the real fun comes from knowing how this thing is put together, so we can contribute our own unique ideas and systems. Broadly speaking, this course is about practically demystifying how the internet works, especially in the context of PHP. This chapter provides the what and why of web servers in both abstract and practical ways. We'll apply these lessons throughout the rest of this course. It's important to have a common vocabulary and context before introducing more complex ideas. Let's start with some fundamental networking concepts that are useful for understanding how you'll be connecting to and configuring your server. The internet that we're using is built using the Internet Protocol Suite, which is a standard for the model and communication protocols used. When this architecture was first developed, the original protocols included the Transmission Control Protocol, or TCP, for data transport, and the Internet Protocol, or IP, for routing and addressing requests for information. Today, the Internet Protocol Suite contains dozens of additional standards, but is commonly referred to as TCP/IP because of these two original protocols. Another protocol that's included is the Hypertext Transfer Protocol, also known as HTTP. HTTP is the standard for the exchange of HTML documents. In practical terms, web servers use HTTP to transfer webpages. Web servers run on hosts, which are computers or devices on a network. Hosts can provide information to each other across a network, such as applications, documents, and other resources. Hosts communicate using a standard protocol, such as those found in the Internet Protocol Suite, like HTTP. Each host can have an IP address, which is used to route requests to a specific computer.
An example of an IP address is 188.184.67.27. If you're anything like me, remembering sequences of numbers is something that I like to delegate to computers. To give networking more semantic context, hostnames provide human-readable labels for a host that map to an IP address. That way, instead of trying to remember a website's IP address, you can just type in info.cern.ch to see the world's first website from 1990. That hostname maps to a single IP address, which we saw before. Something interesting to note: while each hostname can only map to a single live IP address, multiple hostnames can have the same IP address. In fact, we'll be using this technique later in the course. To recap, the internet is basically a network of hosts, or computers. Each host is accessible by its hostname, which maps to an IP. A host can send HTML documents over HTTP. So, how do webpages get served?
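The hostname-to-IP mapping described above can be seen in miniature in the hosts-file format that Linux uses for local name resolution. This is a sketch, not a real DNS lookup: the file name is made up, the IP for info.cern.ch comes from the example above, and the home.cern entry is illustrative only, showing how multiple hostnames can share one IP address.

```shell
# Sketch of hostname-to-IP mapping using a hosts-style file (the same
# format as /etc/hosts). File name and second entry are illustrative.
cat > hosts.demo <<'EOF'
188.184.67.27  info.cern.ch
188.184.67.27  home.cern
EOF
# Look up the IP for a hostname, the way a resolver conceptually does:
awk '$2 == "info.cern.ch" { print $1 }' hosts.demo
```

Both entries resolve to the same address, which is exactly the "many hostnames, one IP" technique we'll use later when configuring multiple sites on one server.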

NOTES
Transmission Control Protocol (TCP) - transport
Internet Protocol (IP) - routing and addressing
Hypertext Transfer Protocol (HTTP) - web servers use it to send webpages
info.cern.ch (the first website)

 

1.2 The lifecycle of a request

One of the job interview questions I've both received and given is: what happens when you type a URL into a browser and press Enter? The answer can get pretty long and is intentionally open-ended. The point of this type of question is not to trap or confuse. It's to determine if the candidate has a broad understanding of how the web works. A high-level perspective provides context for architectural decisions, troubleshooting, and delegation of responsibility. Let's start with the abstract for just a moment, then I'll relate the abstract with concrete examples to give relatable context. A client-server model is a distributed computer application structure with two main roles, client and server. The client requests data from a centralized resource, known as a server. The server then responds with the requested data. The World Wide Web as we know it is built with a client-server architecture, where a client requests data sent over the Hypertext Transfer Protocol, or HTTP. A client makes a request for an HTML document from the server over HTTP, which builds a response to fulfill the request and sends it back to the client for presentation. It's a roundtrip that happens over and over again. A client makes a request from a server and the server sends a response. Then, the response is rendered by the client. Then the client makes another request. The server sends another response, and so forth. That's really abstract, so let's take a closer look at the individual pieces. An HTTP client makes the request for data to a server, but what exactly is it? You are almost certainly using one now. The vast majority of the time, HTTP clients are web browsers, with common examples being Mozilla Firefox, Google Chrome, or Microsoft Internet Explorer. A browser is way more relatable than an HTTP client. Browsers make requests from a web server. A web server is a program that uses HTTP to serve files that make up a webpage, including HTML documents, images, stylesheets, and so forth.
There are a number of different web servers. The most common is Apache, and NGINX is also becoming increasingly popular. We'll compare these two later in the chapter. Once the request is received by the web server, it decides how to handle it. There are two types of response. The first is static, which doesn't need to be generated and already exists. Static responses typically consist of files that already exist, such as images, stylesheets, or even HTML documents. If the web server finds the asset, it just sends it back, job complete. The other type of response is dynamic, which needs to be created on the fly. Dynamic responses are where the real fun is. A dynamic response is generated using an interpreter that executes scripts written in languages like PHP, Ruby, Node.js, and others. With an interpreter like PHP, the dynamic response can be created completely independently of any other service. Here's a full HTML hello world. If we made a request directly to that file through the web server, we'd get a W3C-valid HTML5 document. Not particularly interesting, but valid. It's not really practical to store content in code, so it's more typical to use a database server for storing persistent data. Persistent data is information that doesn't change often, like the text of a blog post. Database servers provide an interface for organized collections of data. PHP or another interpreter can connect to the database server in order to store and retrieve data. PHP then uses the contents of the database to build the response, such as rendering a blog post. A popular database management system, or DBMS, is MySQL. We'll explore MySQL and some other alternatives later on as well. With this context, let's reexamine the lifecycle of a request, starting with a static request. A browser sends a request for an image from a web server over HTTP, and the web server finds the file and sends it back, also over HTTP. Pretty straightforward.
A dynamic request has some more steps, but at a high level, it's pretty similar. A browser like Firefox sends a request for a page from a web server like Apache. Based on a rule like the name of the script, the web server gives the context of the request to the interpreter, such as PHP. If necessary, PHP connects to the database, like MySQL, to get or set the data as requested. PHP then assembles the response and gives it back to the web server, which finally sends the response back to the client. The end user never actually directly accesses the database in this model. All requests go through PHP. To review, the components of the lifecycle of a web request are an HTTP client or browser, a web server such as Apache, an interpreter such as PHP, and a database server such as MySQL. I don't think we need to worry about the web browser, but where should we be setting up the server components?
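The web server's static-versus-dynamic decision described above can be sketched in a few lines of shell. This is purely illustrative: the directory, file names, and `serve` function are made up, and a plain `echo` stands in for handing the request off to a real interpreter like PHP.

```shell
# Sketch of the web server's first decision: if the requested file exists,
# send it back as-is (static); otherwise hand the request to an interpreter
# (dynamic). All names here are illustrative.
mkdir -p docroot
echo '<h1>Hello, static world</h1>' > docroot/index.html

serve() {
  path="docroot/$1"
  if [ -f "$path" ]; then
    cat "$path"                                     # static: file already exists
  else
    echo "<p>Generated at request time for $1</p>"  # dynamic: built on the fly
  fi
}

serve index.html   # static response, served verbatim
serve blog.php     # dynamic response, generated per request
```

In a real Apache setup this routing is driven by configuration rules (for example, handing `.php` files to the PHP interpreter), which we'll configure later in the course.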

 

1.3 Where should I be developing?

Software engineers are faced with many technical challenges across projects. Not only do you have to deliver a functional product, but you need a place to build, test, and debug as well. Where's the best place to do web engineering work? Well, in information and communication technology, a common phased approach to environments used for software testing and deployment is known as DTAP, which stands for Development, Testing, Acceptance, and Production. Each environment is similar, yet has a distinct purpose. Let's explore each in order with a focus on web application development. A development environment, or dev, is a working environment where changes to software are developed. Practically speaking, it's typically an individual developer's workstation. A dev environment includes the libraries and support software needed to run the complete application, test its functionality, and finally, debug its execution to find problems. When the developer believes their work is ready, the changes are copied to a test environment for verification that the work was completed. Optimally, this is a suite of automated tests. The two most common forms of testing are unit tests, where the smallest testable parts are individually and independently exercised to determine if they're ready for use, and integration tests, where the individual software modules are combined and tested as a whole. After successful testing, the changes are deployed to an acceptance environment where someone verifies that the product works as expected. Depending on the organization, the customer themselves or a quality assurance team member tests the changes. For example, if the intent of the change was to make the sky blue and it's actually red, then that's a problem. Once the changes are accepted, the final stage is deployment to a production environment that the users of the application directly interact with.
Production environments should not have any debugging and testing tools, as they can have negative impacts on both security and performance. Instead, production environments should rely on logging and other forms of analysis to monitor the health of the application. The acceptance environment configuration and content should be as similar as possible to the production environment to provide the most relevant test results. To review, in a DTAP workflow, each new change goes through the same environments. It starts in development for building and experimentation, is copied to testing for automated verification, sent to acceptance for validation and acceptance, and finally deployed to production for use. Each environment should be isolated from the others for a variety of reasons. A developer's testing shouldn't affect users of the system; experimentation should be isolated. Conversely, users shouldn't be able to access development or testing environments, which could include incomplete or unreleased features. Also, users definitely should not be using development or debugging tools. To that end, a proper development environment is not accessible by the public, and any code changes made in that environment should not affect the rest of the team. In that sense, dev environments are optimized for flexibility and experimentation, where changes and features can be built in safety. Therefore, the dev environment is the only place where code changes should take place. If you're feeling that pang of guilt right now because you've edited directly in production, well, you're in good company, because I've done it as well, and we've probably all done it. Just please don't do it anymore. Does it make sense to have a complete DTAP workflow for all projects? No, in fact it can be overkill for a lot of smaller work and experimentation. At the very least, there should be a separation of development and production. So, practically speaking, where should a development server go?
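One concrete way to keep environments isolated is to deploy the same codebase everywhere while varying only per-environment configuration, with debugging enabled in development alone. The sketch below illustrates that idea; the `config` directory, file names, and ini keys are all made up for demonstration.

```shell
# Sketch: one codebase, one config file per DTAP environment.
# Debugging is switched on only in development, never in production.
mkdir -p config
for env in development testing acceptance production; do
  if [ "$env" = development ]; then debug=on; else debug=off; fi
  printf 'environment=%s\ndebug=%s\n' "$env" "$debug" > "config/$env.ini"
done

grep '^debug=' config/development.ini   # debug=on
grep '^debug=' config/production.ini    # debug=off
```

Generating the acceptance and production configs from the same template also keeps those two environments as similar as possible, which is exactly what the acceptance stage needs to produce relevant test results.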

NOTES
DTAP: Development, Testing, Acceptance, Production
Development Environment: libraries, support software, test, debug
Testing Environment: Unit Testing, Integration Testing
Acceptance Environment: Customer, Quality Assurance Team
Production Environment: should not include debugging and testing tools; instead, use a logging system to monitor the health of the application

 

1.4 Where should I put a dev server?

What is the best place for a development server? For most PHP developers, the answer is usually local, on their own workstation. That was easy. Now the hard part. Why is local typically optimal? This is a nuanced question, and there are a number of different options to compare. The first options are web hosting services, which provide space on a server along with internet connectivity. Typically, they'll manage the server configuration, so all you have to provide is the application itself, and the data, and you just use their configuration, whatever it may be. Because they're supporting multiple customers, there's typically less flexibility for configuration. One size fits most. Of course, this isn't free. You are paying to delegate responsibility, which isn't always a bad thing. If you like, you can handle hosting yourself. Self-hosting is always an option. The primary advantage is that it lowers costs by cutting out third parties. It's also a lot more flexible, in that you can basically configure your components however you want. However, now you have more overhead in maintenance and management. You've got to keep the servers up-to-date, fast, and secure. While self-hosting is a valid option, there are a lot of hidden costs in effort and time that can eliminate any potential cost savings. Is there a way to keep it cheap and flexible? Local hosting can be the best of both worlds, where you can have a full web application on your local computer or workstation. Instead of being potentially public, locally hosted content is only visible to yourself, which has its advantages and disadvantages. Since it's local, you are responsible for managing the components, which gives you the ultimate flexibility. As you already have a computer, there are no additional acquisition or licensing costs. There are a number of reasons why local development is a great option. First of all, it's fast, since every service is on the same machine. That means no network latency between you and the servers.
It's also an unshared resource. So as long as you're not doing anything goofy in the background, your application will run as fast as it can. Local development can be really flexible in that you can configure it to your needs. Want a particular version of a server? Install it. You can even work without internet, which is great for travel or even just getting out of the office. To that end, local development can be very portable, especially given that modern laptops have more than enough capabilities to be a great solution. In fact, I use laptops exclusively for development, including authoring this course. Finally, local development is inexpensive, given that you already have the hardware. You don't have to buy or manage more equipment to be able to host locally. Fantastic. Is there any reason why developing locally wouldn't be optimal? Let's look at the question from another perspective. Why develop remotely? Well, platform uniformity is actually really important, meaning having the same versions installed in a common configuration across components. Ever hear or use the excuse that it worked locally, but not in any other environment? Remote development can also make security and management easier through centralized, controlled access to data and code. You're a lot less likely to leave a server on the subway than your laptop. Speed of deployments is also a factor. If everything is centrally located, there's less to transfer and synchronize. For projects with gigabytes or terabytes of data, deployment speed becomes extremely important. Kind of a sobering thought when you think about it. Is there no way to compensate? A lot can be done to mitigate the risks of local development. Starting with access, lock down your accounts with a unique and secure password scheme. You can also enhance it with options like two-factor authentication or a hardware key. You can also encrypt your hard drive, which will render it useless if your computer is stolen.
Making a point to store only what's required for development and nothing else reduces the potential collateral damage in case of a breach. You should minimize or eliminate any personally identifiable information from your local environment. If you store everything locally with no redundancy, it can be a single point of failure in the event of a catastrophic event like a fire, theft, or even just component failure. There are some more steps that you can take. Remote backups can save the day, as long as you can make, maintain, and access them quickly. Configuration management is one of those problems that can be solved with documentation of what components are needed, or with systems designed specifically for that purpose. We'll discuss some options for configuration management later in the course. Finally, the closer that you can get to mirroring the production environment's configuration, the better. At the end of the day, the question to you is: what's right for your project? There's no one right answer. It's best to know what your options are, and I've presented a few already. You should weigh the benefits and risks of any approach that you'd like to take, because it may be worth a higher upfront cost to reduce overhead later on. Even if you're enthusiastic about trying something new, make sure that you have the consensus of the team that you're working with. Help make decisions that are best for the group, not for the individual. Trust me, exceptions slow everybody down. No matter what approach you take, you should focus on building something awesome that you can be proud of. How you do that is up to you. So why use Linux for your development server?

 

1.5 What is Linux and why should I use it?

I've been discussing web development and hosting in greater amounts of detail. Most web servers use Linux as an operating system, and we're going to be using Linux in this course as well. With that said, before you start using any tool, having context about what it is and why it's being recommended will deepen your understanding about the system that you're using. Blindly repeating actions without being able to question or justify why isn't learning. Therefore, let's explore what Linux is and why it's a good option for development. Let's step back, and answer a different question first. What is an operating system? In the purest sense, it's software needed to run programs on a computer. Common examples of operating systems include Microsoft Windows and Mac OS. An operating system manages the computer's hardware as an intermediary, meaning programs only need to know how to talk to the operating system, not how to work with the CPU or other peripherals. Additionally, an OS provides common services such as user interactions through a graphical user interface or a command line. They also have the ability to manage programs by performing tasks such as installing, running, stopping services, and so forth. With that context, Linux is a free and open-source operating system. That does not necessarily refer to the price of the software. To quote the Free Software Foundation: think of free, as in free speech, not as in free beer. More on that in a moment. Linux is modeled on Unix, which is a family of multitasking, multiuser operating systems that have been around since the '70s. Linux is available on pretty much any computer hardware platform, young and old, off the shelf to hand-built. In fact, Linux is the leading server operating system in the world. And that's no small feat. I mentioned free and open-source software just a moment ago, as that's a core value that has led to the development and spread of Linux. Free and open-source software is a movement by the GNU project.
Yes, I'm pronouncing that correctly. Free is an interesting term; in this context, it means the freedom to copy, edit, and distribute both the source and the program itself. This empowers users to change the software to fit their needs. The ultimate in flexibility. Open-source explicitly refers to the source code, which is available to read and modify. The advantage of open-source software is that it uses the collaborative process of a group of people to develop and maintain software. Sounds good in concept, but why does it matter? Well, collaboration results in a much better product than the efforts of an individual or a group with only one perspective. Also, the free aspect can minimize or eliminate the direct financial costs through the use of core software. As someone who works almost exclusively with open-source technologies, I can tell you that the true cost of free and open-source software is time. What's your time worth? Free and open-source software also offers a wide variety of quality software for you to use that is peer created, reviewed, and designed to be built upon. This allows you to focus on developing your own core product, rather than reinventing the wheel. Finally, this freedom also has fostered a large community support base, both online and offline. There are books, wikis, forums, chats, meetups, and conferences, all of which are for sharing knowledge. In short, Linux is an excellent and viable option for a server operating system and can be used freely for both commercial and non-commercial work. All right, so how do I get started with using Linux?

 

1.6 Choosing a Linux distribution

Saying you use Linux is like saying you're using a car. There isn't just one Linux. The term actually encompasses hundreds of variations known as Linux distributions. A distribution is a combination of software that adds functionality to Linux. Distributions are typically packaged for specific purposes. For example, a desktop distribution intended for end users would typically include a graphical user interface and user applications like an office suite and internet browser. In comparison, a server distribution typically has no graphical user interface, comes with no productivity software, and is easy to configure for a specific purpose like a web server. Some Linux distributions are completely community based, while others have a commercial component in the form of support or custom development. Let's compare some of the most popular and well-supported distributions: openSUSE, Fedora, and Ubuntu, which we'll use as the basis of our server. openSUSE is the fifth most popular distribution according to DistroWatch, which is a website dedicated to news and popularity rankings around Linux distributions. It's intended to be easily accessible, both in terms of acquisition and use. Variants have been developed since 1994, when it was formerly known as SUSE Linux. openSUSE is sponsored by SUSE Linux GmbH and other companies. In 2012, openSUSE drew criticism from principal Linux architect Linus Torvalds for usability issues, especially around security policies requiring root credentials to perform everyday operations. The specific issues raised were fixed later that year. Fedora is a general-purpose Linux distribution and the seventh most popular Linux distribution according to DistroWatch. Fedora has been developed since 1995, when it was known as Red Hat Linux. Fedora has been split into community and commercial editions. Fedora has a short life cycle of only 13 months per version, which can be disruptive when trying to rely on support for a specific version.
In comparison, Ubuntu is focused on features and support, making it very easy to use and build upon. Ubuntu is a fork, or derivation, of Debian, another very popular distribution with a long legacy. Ubuntu is the fourth most popular distribution according to DistroWatch. It's sponsored by Canonical, which makes money selling support options for it. Ubuntu has a very regular release cycle of every six months. Sounds short? Well, Ubuntu also has a long-term support version available with at least two years of support. So, why Ubuntu? Ease of use, community and commercial support, duration of supported versions, and popularity are among the major factors that guided the decision to use Ubuntu for a development server. Given all this context, it should be a bit clearer why Linux is well suited as a web development server. Linux distributions are purpose-built with roles such as desktop or server, including web development. There are many tools already built in, or readily available for installation, to extend Linux. Linux is easy to configure for specific purposes, while in comparison, your existing operating system may feel a bit like a square peg in a round hole: not a good fit when you try something that it wasn't designed for. Linux, and Ubuntu specifically, have great support in the forms of documentation, knowledge bases, and communities. Chances are, you aren't the only person with that problem, so look around if you have a question. And finally, best of all, Linux is free for collaboration, building, and sharing. I can't think of a better foundation for building web applications. So, where are we going to install Linux?

NOTES
A distribution is a combination of software that adds functionality to Linux.
distrowatch.com

 

1.7 Introducing virtualization

Linux is really useful, but it needs to be installed somewhere. Most people, including myself, don't have a spare computer lying around to use as a development server. This is where virtualization and virtual machines become incredibly useful. A system virtual machine, or VM for short, is the complete software emulation of a physical computer system. Meaning, it's functionally equivalent to whatever it's emulating. A VM allows an entire operating system to run normally within it, and the operating system thinks that it's installed on a regular computer. A VM typically emulates an existing hardware system, such as the architecture found in desktop PCs. Using this software, you can run multiple operating system environments within VMs on a single computer, even simultaneously. This allows you to do things like run a copy of Windows 10 within a VM on Mac OS, which can be incredibly useful for testing, or using, platform-specific software. A system virtual machine is like a matryoshka stacking doll, where a smaller doll is within a bigger doll. What manages the virtual machines? A hypervisor, also known as a virtual machine monitor, creates and runs virtual machines, allowing you to manage a VM like any other program that runs in your OS. A hypervisor also includes controls for configuration and built-in status monitoring to show the health and activity of a VM. An example of a hypervisor is Oracle VM VirtualBox, which we're going to use to manage our virtual machine. To review, a hypervisor manages virtual machines running on a computer. Let's talk about the relationships of the components within a VM. Now, a host machine is the computer on which a hypervisor is running, which provides computing resources such as processor, disk space, and memory. Practically speaking, in the case of our virtual server, it's your existing computer. This distinction is necessary when it comes to some of the configuration we'll be doing. 
In contrast, a guest machine, also known as a virtual machine, is an independent and distinct instance of an operating system, software, and data within a host. A guest uses the host's computing resources, like the processor and so forth. One of the neat things about virtual machines is that multiple instances, or guests, can share host resources at the same time. They can all coexist on the same computer, yet work in isolation, so they're not interacting with each other, unless that's what you want. Let's see how the various components relate to one another. We'll start with the hardware, which can be a computer like a desktop or a laptop. The host operating system, which can be something like Mac, Windows, or even Linux, runs on the hardware. A hypervisor, such as VirtualBox, runs as a program within the host operating system, allowing a guest virtual machine to use resources through the host and hardware. Multiple guests can coexist in harmony with each other. I use this particular visualization on purpose, as the amount of available resources decreases from the hardware down to the guest VM. So, with the context of what system virtual machines are, I can describe Oracle VM VirtualBox in a bit more depth. It's a free, open-source hypervisor, which means anyone can use it. Of course, enterprise features and support can be licensed from Oracle, but these features aren't necessary, or even appropriate, for our context. VirtualBox is very popular amongst hypervisors, due in no small part to its stability and ease of use for managing VMs. In fact, VirtualBox is used by other open-source software to host and configure specialized VMs, which can be useful for team collaboration. We'll explore that later in the course. Finally, VirtualBox is unique in its portability, where VMs can be shared across platforms. So a VM created in Windows will actually work on a Mac. Depending on your needs, this can be really useful. 
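As a side note, the host/guest relationships described above can also be inspected from the command line. VirtualBox ships with a CLI called VBoxManage; as a sketch (this assumes VirtualBox is already installed on the host, which we'll do in the next chapter), you can list registered and running guests like this:

```shell
# Ask the host's hypervisor for every VM it knows about
VBoxManage list vms

# Show only the guests that are currently running
VBoxManage list runningvms
```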
So, can we just install Linux onto a VM and all of our problems are solved? Not exactly.

NOTES
Host: the computer we're actually using
Guest: the virtual machine we create

 

1.8 What's a LAMP and why does it matter?

We're using VirtualBox as a hypervisor, so we have a place to install Linux, but what should be running on Linux? A LAMP, of course! But, what is a LAMP and why does it matter? Well, a LAMP is what's known as a solution stack, which is a computing term for a set, or group, of software that has been selected to perform a specific task without any outside dependencies. LAMP is specifically a web server solution stack that serves dynamic, database-driven, high-performance websites using only free and open-source software. Sound familiar? The LAMP solution stack is incredibly popular among web hosts, and almost half still use it. The word LAMP is actually an acronym. It starts with Linux as the operating system to run the software. Then, Apache HTTP Server as the web server itself. MySQL is used as the database management system, and finally, PHP is the scripting language that builds the dynamic web pages. So, why this particular combination of software? Linux distributions are purpose built to be whatever they need to be. Want just a server? There are hundreds of options. Also, most distributions include some sort of package management. A package manager easily installs software from a repository with a single command. This makes for really fast and straightforward software installation without having to deal with dependencies and other required software. Most of the packages include preconfigured software ready to work out of the box, which is time-saving as well. The nice thing about software is that there's usually a choice, and you don't actually have to use Linux for a web server solution stack. You can use Windows for a WAMP, or Mac OS for a MAMP. However, the versions most people use run on desktop operating systems, which are built for a different purpose. They're decent for local development, MAMP especially. 
However, a desktop is not suited for production environments; it's just the wrong tool for the job, the same reason why you wouldn't wear sweatpants to the office. The Apache HTTP server also has some fantastic advantages. It's very robust, meaning it can deal with large amounts of traffic. Apache is also flexible, capable of serving many different kinds of content out of the box. As a result, it's the most popular web server; over 45% of the web is served by a version of Apache, according to the March 2017 Netcraft Web Server Survey. It's also relatively easy to configure, though the flexibility does come at a cost of complexity. With that said, there's comprehensive and well-maintained documentation that will guide any server administrator to success. Apache is by no means the only web server out there. Nginx changes the A to an E, at least phonetically, for a LEMP. Nginx is an extremely lightweight and fast web server, and it's a reverse proxy server, able to handle extremely large amounts of traffic. There's a quote from Chris Lea that sums up nginx quite well: "Apache is like Microsoft Word, it has a million options, but you only need six. Nginx does those six things, and it does five of them fifty times faster than Apache." MySQL is an open-source relational database management system, and the second most popular database in the world, second only to Oracle. MySQL is stable, extremely fast, versatile, flexible, and scalable, from small projects with a few dozen records to enterprise-scale databases with millions of rows per table. MySQL is also very well supported, both online and offline; there's no lack of documentation. Of course, MySQL is not the only database. MariaDB is actually a fork of MySQL, made by a number of the original authors over concerns about the commercial influence from Oracle. Currently, the two products are compatible with each other. Another option is PostgreSQL, for the slightly awkward acronym, LAPP. 
PostgreSQL is an object-relational database management system, offering support for objects, classes, and inheritance, similar to Oracle's database. It's more secure and stable, but at a cost of scalability. Depending on your needs, it may work well. That's a lot of options, isn't it? Let's review exactly what we're going to be installing. For Linux, we'll be using Ubuntu Server 16.04 long-term support. For the web server, we'll be using Apache HTTP Server version 2.4. For the database, we'll be using MySQL version 5.7, and PHP will be on version 7.0. A LAMP contains all the major components of a modern web stack, and it's flexible and extensible for application development. In this chapter, we've been answering a number of questions. I started with networking fundamentals, such as the protocols used on the internet, and terms such as host, to give context for the entire course. Then, we followed the life cycle of a request, from the browser, to the web server, and back. We also discussed the differences between environments, and the advantages of their separation, answering the question of where to develop: a purpose-built development environment. Next, we evaluated different options for locating a development server, and provided context about why local development is generally recommended and separate from production. We explored what Linux is, a free and open-source operating system, and why it makes sense to use a purpose-built system for a specific need that happens to be free. We surveyed a number of Linux distributions and compared their characteristics, and then chose Ubuntu for this course. I introduced the concept of system virtual machines, with the example of VirtualBox, which can be used to host a guest operating system within your existing operating system. Finally, we defined what's in a LAMP: Linux, Apache, MySQL, and PHP, and some of the variations on the web solution stack. 
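The package-manager workflow mentioned earlier in the chapter can be sketched with Ubuntu's apt; the package name here is just illustrative, and we'll do the real installations together later in the course:

```shell
# Refresh the list of available packages and versions from the repositories
sudo apt update

# Install the Apache web server in a single command; apt resolves
# and installs all required dependencies automatically
sudo apt install -y apache2
```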
At this point, we've got the context of what and why, let's move onto the next chapter, where we'll create a virtual machine using Oracle VM VirtualBox.

 

2. Creating a Virtual Machine

2.1 Preparing your workstation

In this chapter, we're going to create a guest virtual machine using Oracle VM VirtualBox, the virtualization software we're using to manage your private development server. By the end of the chapter, we'll have a running Linux server ready to be configured. Let's start by preparing your workstation to host the server. The first step is to download and install VirtualBox. This course uses version 5.1.22, which at the time of this recording is the most current version. Historically, newer versions have been backward compatible, so they should work without issue. On the download page, you'll see the term host. This refers to your current operating system. From a browser, go to VirtualBox.org/wiki/Downloads. At the top, under VirtualBox binaries, choose the host that matches your operating system, Windows or Mac. If you want to use the exact version that was used in this course, use the VirtualBox older builds link. We're using version 5.1, specifically VirtualBox 5.1.22, and I'm using the OS X hosts version. When the download is complete, just run the installer. Click Continue, Continue, and Install. It'll ask for an administrative password. This is normal; just type it in, and it's installed. No other configuration is required. Click Close. You will also need to download the Ubuntu Linux Server installation disk image. There will be a couple of options. We'll be using the long-term support version, known as LTS. Within that, there are two variants, 64 bit and 32 bit, which refers to how the CPU handles information. Modern operating systems are by default 64 bit, so I recommend the 64 bit version. If you have a much older system or you're not sure, you can use the 32 bit version. It's guaranteed to work on all systems, but it will be a bit slower. You don't need to do anything else with this file right now; just remember where you downloaded it. To get Ubuntu, open a browser, and navigate to Ubuntu.com/download/server. For Ubuntu Server 16.04.2 LTS, click the large Download button. 
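If you prefer the command line, the same installation image can be fetched with a download tool; a sketch, where the mirror URL is illustrative and may have changed since this recording:

```shell
# Download the 64-bit Ubuntu Server 16.04 LTS installation disk image
wget http://releases.ubuntu.com/16.04/ubuntu-16.04.2-server-amd64.iso
```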
Finally, we need to create a folder that will be shared between the host and guest systems. This is where your development files and projects are going to go. We're going to name it sandbox, which in computer science refers to a testing environment that isolates untested code changes from the public. It's also a short name that clearly indicates the purpose of what we're doing. Choose a location that is convenient and easily accessible, such as the desktop, and create a folder named sandbox. As the file systems in Linux are case sensitive, make sure it's named in lowercase to avoid problems later on. At this point, we should be ready to create the virtual machine to house our Linux server.
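To see why the lowercase name matters, here's a small sketch you could run in a Linux terminal; on a case-sensitive filesystem, names that differ only in capitalization are distinct files:

```shell
# Create the shared folder and two files whose names differ only in case
mkdir -p sandbox
touch sandbox/readme.txt sandbox/README.txt

# On Linux (case-sensitive), this counts two separate files;
# on Windows or default macOS (case-insensitive), only one would exist
ls sandbox | wc -l
```

Keeping everything lowercase from the start avoids surprises when files move between the host and the guest.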

NOTES
In this section, we download and install VirtualBox.
Then we download the current version of Ubuntu.
Since working on code directly in the guest machine would be difficult, we create a folder on our host machine. Later, we'll map this folder to a folder we create on the guest machine. That way, we'll work with the development files directly on our own computer, but they'll still be accessible from the guest machine.

 

2.2 Creating the virtual machine

It's time to create the virtual machine that will house our Linux server. Let's open VirtualBox. Other than some minor visual differences, all the steps are identical for Mac and Windows. This is the Oracle VM VirtualBox Manager interface. At the top there are a number of controls for managing virtual machines. New will create a VM, Settings is for configuration, Discard discards a VM's saved state, and Start will boot a VM. Underneath, on the left, is a list of all the virtual machines on your system, which is currently empty. We're going to have to do something about that. Click New to create a new virtual machine, which will bring us to a wizard that will walk us through the VM creation. First, we'll specify a descriptive name to identify the new virtual machine. I'm going to set the name to Sandbox. Then, the type of the operating system that we plan on installing, which in our case is Linux, with the version Ubuntu (64-bit). Specifying the type and version doesn't actually install the operating system, but it does set the label and icon. Click Continue. The guest virtual machine consumes resources from the host, and the maximum available amount of memory, or RAM, needs to be specified. A gigabyte of RAM should be fine for most situations. It can always be changed later. When you are set, click Continue. Every computer needs persistent storage for files and programs, so we're going to need a virtual hard drive. Practically, a virtual hard drive is just one large file that is stored within your host. This is nice because there is no need to partition or format anything on your host. With that context, let's create a virtual hard drive. Make sure Create a virtual hard disk now is selected, and click Create. This will open another wizard. We've got a couple of options for the format, which would be useful if we wanted to share the VM with other virtualization software, but in our case, there's no need. Just stick with the default, the VirtualBox Disk Image, or VDI. Click Continue. 
This step is for the storage of the virtual hard disk, and there are two options for how the file is actually physically stored. It can be dynamically allocated, meaning that the virtual disk will consume space on the drive as it fills up. This is a one-way path, meaning that if you fill it up then remove files, it won't shrink automatically. It is also a bit slower at first, until the size stabilizes, so I only recommend this approach if you have a very small hard drive. The other option, a fixed-size hard drive file, allocates, or takes up, space up front, and the result is typically faster. If you know about how much space you need, definitely go with fixed. You can always resize later if needed. I'm going to choose fixed size, and click Continue. The virtual hard disk file needs to be named and placed on the hard drive. By default, the name is already set to the name of the VM, Sandbox, and that should be fine. If you want to change the location of where the file is physically stored, click the folder icon over on the right. By default, it's stored in a folder under VirtualBox VMs. I'm not going to change anything, so I'll click Cancel. The final question is about the size of the virtual hard drive. The default, 10 gigabytes, will be plenty. When you're ready to create the virtual hard drive, click Create. This will take a few seconds or minutes, depending on the speed of your computer. When complete, we'll be taken back to the manager. On the left, the newly created Sandbox VM is visible. At this time, the VM is powered off, which is the same kind of term you would use for an actual computer. Over on the right, there are details about the virtual machine configuration, grouped by function. We're going to make a few changes to how the VM is configured to optimize it for development.
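For reference, the same virtual hard disk could be created with VirtualBox's command-line tool instead of the wizard; a sketch, using the file name and the fixed 10 GB size chosen above:

```shell
# Create a fixed-size (pre-allocated) 10 GB VDI disk image;
# the size is specified in megabytes
VBoxManage createmedium disk --filename Sandbox.vdi \
  --size 10240 --format VDI --variant Fixed
```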

NOTES
2 GB RAM
10 GB HDD (fixed size)

 

2.3 Optimize the VM configuration

Now that the VM has been created, let's optimize it for local development. To change the configuration of a VM, select it in the list of VMs on the left, then click the Settings icon. No changes are needed in the General category, so let's go to the System category, which holds settings related to the basic hardware presented to the virtual machine. Go to the Processor tab. To improve performance, let's increase the number of processors from one to two CPUs. Under Extended Features, click Enable PAE/NX (Physical Address Extension), which Ubuntu Server can use. Now that the system is configured, we'll go to the Storage category to manage virtual hard disks, optical and floppy drives. The storage tree on the left lists the controllers and the devices connected to them. To install Ubuntu, we'll need to virtually put the downloaded disk image in the virtual drive. Under Controller: IDE, click the row with a disk icon to select it. To the right of Optical Drive, there's another disk icon. Click it, and go to Choose Virtual Optical Disk File. Browse to where you downloaded Ubuntu, select it, and click Open. We're done with storage, so let's configure audio next. We don't need audio, so uncheck Enable Audio. After audio comes the Network category. This step is very important, as we're going to need bidirectional connections to the VM to access webpages and so forth. VirtualBox's default networking mode is network address translation, or NAT for short, where VirtualBox's networking engine maps traffic to and from the VM. In NAT mode, by default, the guest virtual machine is unreachable from the network, including from your host computer and browser. Instead, VirtualBox NAT uses port forwarding, which is where NAT listens to networking traffic on one address and port and resends that traffic to a different address and port, such as from the host to the guest. This configuration is bidirectional, so traffic can go both in and out. 
By default, VirtualBox doesn't forward any ports, which will make it impossible to access the server. And that's not very useful, but there's a practical reason for it. If your host can be accessed via a particular port on a network and you're forwarding it, then your guests can also be accessed. Don't worry, as modern versions of both Windows and Mac come with a secure firewall that will block outside access if it's enabled. As a best practice, ensure that you have a firewall of some sort that is enabled and blocking outside traffic. Use common sense and utilize best security practices; minimize risk and protect your work, just like you would with any other program. You'll be fine. We're going to forward traffic to four services necessary for managing a local development server. To avoid any potential conflicts with services that you may already be running, we're going to forward them to nonstandard ports. This is not a replacement for security if you're not using a firewall; security through obscurity is a delay tactic. For each service, we'll start with the default port on the guest and specify a new port on the host. The first will be for HTTP, which will allow webpages to be served from Apache to a web browser. The default port for hypertext transfer protocol is 80, and that's the port that Apache runs on in the guest. We're going to use port 8080 for the host port. The second rule will be for the MySQL database, which will allow direct connections to the MySQL server for debugging, imports, and so forth. Normally, this wouldn't be exposed to anything except the web server, but this is for development, so it's okay in this case. The MySQL default port is 3306, and we'll forward it to 9306 on the host. The third rule will be for MailCatcher, an open-source development utility that runs a simple receive-only mail server and displays the emails in a web interface. 
We're going to be using MailCatcher instead of an email server to reduce the risk of accidentally sending emails. The MailCatcher interface default port is 1080, and it's uncommon enough that we can just forward it to 1080 on the host as well. No changes necessary. The final service will be for SSH, or secure shell, which we'll use to manage the server from the command line. I will discuss SSH in much greater detail in the next chapter. SSH runs on port 22 by default, and we'll forward it to port 2222 on the host to avoid any conflicts. Right, that's enough context. Let's configure the VM so we can access its networking. Back in the network configuration, under the Adapter 1 tab, click Advanced, then Port Forwarding. We're going to forward a total of four ports to avoid potential conflicts with software already running on your computer. Remember, host is your local workstation and guest is the virtual machine. Click the plus icon to add a forwarded port. This will add a new blank rule. Click a value and start typing to change it, or double-click it explicitly. Our first rule will be named Apache, which will allow us to get web pages from Apache using a browser. The protocol can stay TCP, and the host IP can be blank for any IP. We're going to set a nonstandard host port to prevent any potential local conflicts. Specify 8080 as the host port. In the guest port, specify 80, the default HTTP port for web traffic. Click plus to add a second rule for the database, called MySQL. This will allow direct connections to the MySQL server, which is extremely useful for debugging, imports, and so forth. We'll specify port 9306 for the host port and 3306 for the guest. Click plus a third time for MailCatcher, which we'll use to debug email. Specify a host port of 1080, and a guest port of 1080 as well. Click plus a final time for SSH, which we'll use to manage the server from the command line. Specify a host port of 2222 and a guest port of 22. When done, click OK. 
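For reference, the same four rules can be added with the VBoxManage command-line tool instead of the GUI; a sketch, assuming the VM is named Sandbox. Each rule is name,protocol,host IP,host port,guest IP,guest port, with blank IPs meaning any address:

```shell
# Forward nonstandard host ports to the default guest ports
VBoxManage modifyvm "Sandbox" --natpf1 "Apache,tcp,,8080,,80"
VBoxManage modifyvm "Sandbox" --natpf1 "MySQL,tcp,,9306,,3306"
VBoxManage modifyvm "Sandbox" --natpf1 "MailCatcher,tcp,,1080,,1080"
VBoxManage modifyvm "Sandbox" --natpf1 "SSH,tcp,,2222,,22"
```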
Finally, we'll set up shared folders to easily exchange data between the VM and the host. Remember the sandbox folder on the desktop? That will be shared. Click the plus folder icon to the right of the folders list. For the folder path, go down to Other, then browse to the desktop, select sandbox, and click Open. The folder name will be set to sandbox, which is fine. We want to be able to change files from within the guest, so leave Read-only unchecked. However, we do want this folder available every time we start the server, so check Auto-mount. Click OK to add the share, then OK to save the configuration changes. That might seem like a lot of configuration, but it's a one-time job. We're ready to turn on the virtual machine and install Ubuntu.

NOTES
System > Processor: 2 processors, Enable PAE/NX (Physical Address Extension)
Audio: we won't be using audio
Network > Adapter 1 > Advanced > Port Forwarding
HTTP: Host 8080 Guest 80
MySQL: Host 9306 Guest 3306
MailCatcher: Host 1080 Guest 1080
SSH: Host 2222 Guest 22
Shared Folders: add the folder we created on the host machine

 

2.4 Installing Ubuntu Server 16.04 LTS

Currently, we have a configured virtual machine but no server. Let's install Ubuntu Server 16.04 LTS. We'll get started by selecting Sandbox over on the left, then clicking Start to turn on the server. The virtual computer will turn on. As this is the first time, a warning will appear. This is completely normal. When you click on it, it says the auto capture keyboard mode is on. What's that? Auto capture keyboard means that whenever the VirtualBox VM window is active, anything you type will go directly into the VM, as if you were using that machine. That's actually exactly what we want, but not forever. To exit, press the host key, which is a reserved special key to return ownership to the host machine. On Mac, the host key is the left Command key. On Windows, it's the right Ctrl key. The end of the message describes what the host key is currently set to. You can close the message by clicking the X, or tell VirtualBox never to show that message again by clicking the far right icon. VirtualBox has already booted the mounted Ubuntu disk image into the boot menu, which will ask which language to use. English is the default and what this course will be demonstrated in. Choose the language you'd like and press Enter. The boot menu has a number of options, including installing Ubuntu. Before pressing Enter, let's specify a version of Ubuntu that is optimized for virtualized environments by changing the mode. If you're on Windows, press F4. On a Mac, press Function + F4. A list of available modes will be shown, including Normal, the default, and at the bottom, Install a minimal virtual machine. Navigate with the arrow keys down to the bottom option, and press Enter. Now, press Enter to install Ubuntu Server. The Ubuntu installer wizard has started, and it's triggered another message about mouse pointer integration at the top. Mouse pointer integration is related to auto capture keyboard. 
As Linux supports this feature, there's no need to capture the mouse input. You can just use your mouse over the VirtualBox VM window, and it'll automatically use the input. The neat thing about this is we don't need to worry about the host key when in Ubuntu. Back in the installer, you can close this message by clicking the X, or tell VirtualBox to never show it again. Once again, you'll need to select the language, but this is the language for the Ubuntu installer, not the Ubuntu boot menu. Sorry, I know it's redundant. English is what I'm using, so I'll press Enter. The next option will be the location for the time zone locale. Using the arrow keys, choose whatever option is most appropriate for your needs, including navigating down to the bottom to Other if your country, territory, or area isn't listed. I'm in the United States, so I'll just press Enter. After the locale is the keyboard configuration. If you have a standard US keyboard, this'll be really straightforward. I'm going to say no to detecting the keyboard layout, and choose the particular country of origin for my keyboard, which is English (US), and press Enter. Again, there are a number of options for the particular keyboard layout. I have a standard English (US) keyboard, so I'll just press Enter. Ubuntu will load a number of drivers, and start configuring the hardware. After a bit, we'll need to configure the network. The first question is regarding the hostname. A hostname is a human-readable nickname assigned to a device on a computer network. A hostname is used to translate human-readable requests into network-readable IP addresses in order to route a request. We'll explore hostnames and networking in greater detail in the next chapter. In the meantime, we're going to specify a hostname setting in the installer in order to identify the guest on our network, and to set defaults in the operating system. Back in the installer, let's set the hostname. 
Instead of the default, let's press Backspace to remove ubuntu, and instead, type sandbox.dev. Press Enter to continue. Now that the network is configured, we're going to set up users and passwords. Before we set up anything, let's take a look at user accounts in Linux. In Linux, for administrative purposes, there's a special superuser account named root, who can do anything they want on the system. With great power comes great responsibility: if you're not careful, a single typo can break the entire system. For this, and other security reasons, by default, the root account is locked in Ubuntu, meaning you can't log in directly as root. Okay, so how can you run administrative commands? Instead of running programs as root, you'll need to create a separate user account for yourself that has permission to run programs as root. As the root account is a superuser account, the command used to run programs with the security privileges of another user is called sudo, which can be remembered as superuser do. Your user account will have permission to use the sudo command, which will give you administrative privileges. Two names are actually required. A full name is used for defaults throughout the system, including programs needing a name, like an email server. The username is used for logins, email addresses, and so forth. There are some best practices for usernames to keep in mind. Usernames must start with a letter. Any remaining characters should be alphanumeric, meaning letters between a and z, and digits from zero to nine, for compatibility across most systems. Usernames should only be lowercase, to avoid inconsistency across systems that do and don't support capitalization. With this context, let's create your user account now. Let's start with a full name. I'm going to type Jon Peck as my full name, then press Enter. Next, we'll have to specify a username, which will be used for authentication and so forth. 
Your first name in lowercase is a good default, or if you have a particular nickname that you're used to, specify that. I'm going to use jpeck, as I've used that across many systems. Then press Enter. We'll need a password for the username. Use something you can remember that is strong enough to prevent someone from guessing it if they got access to your workstation. If you use something too weak, Ubuntu will warn you to enter a stronger password. I'm going to use... nice try, I'm not going to tell you my password, nor should you tell your password to anybody else. If you'd like to see the password that you're typing, press Tab to move the cursor, then Space to select Show Password in Clear, then Tab until you get back to the password field. Then, when you type, the password will show. I'm going to hide it again, so Tab, Space, then Tab until we're back to the password field. I'm going to type my password and press Enter, then type it again to verify. It's warning me that I'm using a weak password. I will say yes, I know what I'm doing, because I'm doing this for a demonstration. I don't recommend that you do this. The final question for user setup will be whether or not to encrypt your home directory for additional security. Given that this is a local development environment, we want the system to be as fast as possible, and to rely, in part, on the physical security of the host. So saying no is safe; press Enter. The clock configuration will attempt to auto-detect the time zone, and usually it's pretty accurate. If not, choose a better option. Los Angeles is close enough, so I'll say yes and press Enter. Disk partitioning and management could be a course topic unto itself. Briefly, it's how the file system is configured. Fortunately, Ubuntu includes some sane defaults, so "Guided - use entire disk" will be fine. Press Enter. Ubuntu will ask which disk to use. Notice that it says VBOX, as this is the VirtualBox virtual hard disk. There's only one option, so press Enter. 
Ubuntu will verify that we wish to erase the selected disk, which, in this case, is the virtual hard disk we created when we made the VM. This would be a good sanity check if it were a real computer, not just a virtualized environment. We're safe, so press the Left arrow to move the cursor to Yes, and press Enter to partition the empty drive. Ubuntu will now partition, format, and install. This may take a few minutes depending on the speed of your computer. Now that the base system is installed, we're going to configure the package manager, which is the system for downloading and installing programs. Our virtual machine doesn't have a proxy to connect to the network, instead relying on the host's networking, so blank for none is fine. Press Enter. Apt is the name of the package manager; it will retrieve a list of available software and versions, then perform some upgrades that have come out since the installation disk was created. We'll discuss apt in the next chapter. How do we want to manage upgrades on the system? Installing security updates automatically is really straightforward and makes life easier, so press Down, then Enter to select. Currently, all we have installed is Ubuntu. This is a great platform for installing software, so let's choose a few things that will be absolutely necessary. Press Down until the cursor reaches "LAMP server" for Apache, MySQL, and PHP, then Space to select. Keep in mind, installing the software doesn't mean it's ready to use; there's more configuration ahead, and we'll go through it together. Finally, press Down until the cursor reaches "OpenSSH server" for remote administration, and press Space. When the software has been selected, press Enter. We'll be asked for the password for the root, or administrative, user for the database. I'm going to use the password root, as this is a local development server, but in any other context, and especially a public context, use something randomly generated and secret.
Don't leave it blank, or it'll be very difficult to reset. Press Enter, then repeat the password again to verify. The additional software will install and configure itself with logical defaults. We're getting close to the end of the installation. GRUB is a boot loader, a helper for booting the operating system. We can confidently say that yes, installing to the master boot record of the virtual hard drive is safe and fine, because nothing else is installed. Press Enter. Congratulations, the installation is complete. When I select Continue, Ubuntu will eject the installation disc. However, since VirtualBox is providing virtualization, VirtualBox will actually just unmount the Ubuntu Server install disc. Press Enter to continue. After a few moments, the system will boot into Ubuntu Linux. Assuming that there were no problems, you should be presented with a login prompt. Very cool, our system's ready for configuration. We're not actually going to be using the VirtualBox window to configure this server, so you can minimize it. So, how do we connect to administer the local development server? We'll find out in the next chapter. In this chapter, we prepared our workstation by installing VirtualBox, downloading Ubuntu, and creating a shared folder. Then, we created the Sandbox virtual machine and defined what kind of host resources it would use. After it was created, we configured the guest virtual machine to disable unnecessary hardware options and forward networking ports. When the VM was ready, we installed Ubuntu Server and learned about hostnames, user accounts, and some other core concepts. Now that we've installed Linux, let's connect to the server's command-line interface over the network and start configuring.

 

3. Managing the Server from the Command Line

3.1 Talk to yourself with local networking

We've got a fully functional operating system with some base software, but how are we going to connect to it for configuration? Sometimes it's easiest just to sit down and have a good chat with yourself to work out a problem. Sounds silly, doesn't it? Well, there's a practical reason for it: you listen to yourself faster and better than anyone else. Internet networking is remarkably similar to that. In this chapter, we're going to learn how to remotely manage the server from the command line. By the end of the chapter, we'll be navigating and administering the server over the local network. Back in chapter one, we discussed a number of networking fundamentals, including hostnames, which map to a specific IP address. There's a special hostname, localhost, which is a really fancy way of saying "this computer." The computer you're currently on is your localhost. When you connect to localhost, you're accessing your own network services via loopback, which just routes traffic back to its source without processing or modification. The hostname localhost is easy to remember, and it has a dedicated IP address, 127.0.0.1, which will always map to your own computer. What can you do with localhost? Well, web servers have a number of uses for hostnames. Every HTTP request made by your browser will include the hostname of the target server. As a raw example, here's a GET request for the HTML document at the info.cern.ch root using the HTTP 1.1 protocol. The hostname is info.cern.ch. Web servers can serve different content depending on the hostname, which allows the same server to serve content from multiple hostnames. Commercial hosting providers do this quite often to save resources, and we'll be using this technique with localhost in this course for our own local development server. We're going to configure our local networking to map requests for the custom hostname sandbox.dev to localhost.
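That raw request is just two lines of text, with the hostname carried in the Host header (the request path / and the trailing blank line are standard HTTP/1.1 conventions):

```
GET / HTTP/1.1
Host: info.cern.ch

```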
That way, we'll be able to access our server using the browser and other clients via our custom hostname. We'll also be using that hostname in the configuration of the web server. How can we make that mapping? Every operating system with networking has what's known as a hosts file, which is just a plain text file used by the operating system to manually map hostnames to IP addresses. Think of the hosts file as a networking address book: it's the first place checked to see what the IP address is for a particular hostname. Hosts files are useful because they can be used to locally define any hostname that you want, which will override any external mapping, because the hosts file is checked first. The format of hosts files is really straightforward: just lines of an IP address and a hostname. For example, the entry 127.0.0.1 localhost maps localhost to 127.0.0.1. When we edit the hosts file in a moment, we'll see an entry like that already in place. Within the context of hostnames and server interactions, there are a number of reasons why hosts files are useful for development. For one, they're incredibly fast. There's virtually no latency or delay when connecting to yourself, because the request doesn't have to travel anywhere. Because of that, no internet connection is required if you're using a localhost IP or hostname. You can use practically any domain name you want, because the mapping is local; it won't affect any other computer. Finally, hosts files allow you to access projects by logical names rather than just localhost or an IP address. Fast, convenient, and free: those are some good reasons. We're going to map a development hostname to localhost, which is this computer. I'm going to use sandbox.dev, as it sounds like a regular domain but is explicitly not a real domain. To do that, we'll need to edit the local hosts file and add just one line.
The line is the same on both Mac and Windows: the IP address 127.0.0.1, followed by a tab, and the hostname sandbox.dev. I'll demonstrate how to edit the hosts file on a Mac, then demonstrate how to do it on Windows. Once that's complete, you'll be able to connect to the server to configure it. On a Mac, the hosts file is located at /etc/hosts. We're going to use commands that I'll describe in much greater detail later on in the course. The fastest and easiest way to edit this is to use the Terminal, which is available if you go into the Finder to Go, Utilities, and Terminal. We're going to use the command sudo, which means superuser do, then nano, a simple text editor, followed by the name of the file to be edited, /etc/hosts. It'll ask me for my password, which I'll type, and then once it's open, navigate with the arrow keys by pressing Down until you get to the end. Then type 127.0.0.1, Tab, and then sandbox.dev. When ready, press Control+X to exit, then Y to save, and Enter. On Windows, the hosts file is located at %SystemRoot%\System32\drivers\etc\hosts. They've hidden it pretty well. The easiest way to edit the file is to run Notepad as administrator. From the Start menu, type Notepad, then right-click on it and choose Run as Administrator. Say yes to give permission. Then go to File, Open, and type %SystemRoot%\System32\drivers\etc\hosts, then click Open. At the end of the file, we're going to add the line 127.0.0.1, Tab, sandbox.dev. Go up to File and Save, and then we're done; it's safe to close. Now that we've configured the hostname for our development server to connect to localhost, we can remotely configure the development server.
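After the edit, the end of the hosts file on either platform should contain entries like these (the localhost line will already be present; only the sandbox.dev line is new):

```
127.0.0.1	localhost
127.0.0.1	sandbox.dev
```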

 

3.2 Logging in using Secure Shell

When managing servers, it's important to be able to securely perform remote work without risking your credentials or data. One of the most common methods is to use a secure shell. Secure Shell, or SSH, is an encrypted network protocol between two computers across a network. The encryption provides secure data communication to allow work from potentially insecure or untrusted networks. Even if you're feeling safe at home or at your office, there are a lot of steps across many networks between you and whatever server you're communicating with. I'm not trying to scare you, just teach you to mitigate the risk. One of the things SSH is used for is a remote command-line login with text interactions, no graphics. The remote command line is most commonly used to manage servers. Another purpose for SSH is remote command execution, which is useful for quickly executing one-off commands such as starting or stopping a process. SSH is another example of client-server architecture. An SSH service runs on a server and performs the encryption and authentication. An SSH client can connect to the service and identify itself using special credentials. How does SSH secure itself? SSH uses a secure algorithm known as public-key cryptography to encrypt communication. The way it works is by generating two linked keys: a public key, which is shared with target servers that you want to interact with, and a private key, which is kept secret and secure, such as only on your computer. Practically speaking, the public key on a remote server is used to encrypt communication and authenticate a private key. On the other side, only someone with the private key can decrypt and read the result. What's interesting about this technique is that the key that encrypts data is not the key that decrypts it, so even if you have a copy of the public key, you won't be able to decrypt the data. There are a couple of different ways to authenticate over SSH.
The first is logins with a username and password that map to user accounts on the server; both the username and password are required. These are easy to use because of automatically generated public and private key pairs. This is handled transparently by your SSH client and the server, so your local machine will have a private key, and the remote server will have a public key. This method is secure, but you'll have to manually enter your password every time. The second way uses a manually generated public/private key pair. This is a little bit more involved, but the end result is that authentication requires a private key that matches the public key known to the SSH server. In this case, SSH does not require a password to log into the target system. If the keys do not match, the connection attempt will be rejected. This matching is done against a file on the server called authorized_keys, which contains a list of all public keys allowed to access a user's account on the server. This is more secure than username/password logins because it requires a user to have an actual key file. I'm going to demonstrate authentication using both methods in a moment. While it may seem a bit extreme in the context of a local development environment, there are a few practical reasons to use SSH to manage local servers. First of all, it's extremely convenient, especially in the context of better clients than the VirtualBox screen, which we used to configure Ubuntu. It's not that the VirtualBox screen is bad; it's more like a computer monitor, in that it presents information but isn't great for interactions. For example, you can't use your mouse to select and copy text, which is something an SSH client can do. SSH is also compatible with a number of development tools, including integrated development environments like NetBeans. There are also command-line utilities that use SSH, like the Drupal shell or WP-CLI, the WordPress command-line interface.
Getting in the habit of using SSH is also a best practice for connecting to remote servers. Now, when we connect to the Sandbox via SSH, there are a couple of configurations to keep in mind. The first is the port, which is set to 2222. Earlier, we forwarded the port from 2222 on the host to port 22 on the guest using VirtualBox to prevent potential conflicts. Next is the hostname, which is sandbox.dev. Finally, the username and password will be the same as what was specified during the Ubuntu installation. I'm going to demonstrate using both the Mac and the Windows SSH clients in separate videos, as the techniques are a little bit different. After that point, all the commands will be the same, so there won't be any need to have any operating-system-specific instructions.

 

3.3 Using SSH on a Mac

This video is about using SSH on a Mac. If you're using Windows, please skip to the next video for the equivalent instructions, or stick around to compare the differences. Let's open the Mac Utilities folder now by going to Go, Utilities, and then we're going to double-click on Terminal. This is the local command-line interface, and in many ways it's similar to the one found on Ubuntu. Type ssh and press Enter. There are a number of arguments and options available, but no worries, we'll do this step by step. Let's connect to the server now. Type ssh, followed by a space, then -p for port, which will specify port 2222. After the port comes a space, then the remote system's username; I used jpeck. Then the @ symbol, and then the hostname to be connected to, sandbox.dev. When complete, press Enter. As this is the first time we're connecting, we'll be asked if we trust this host. We do, since we set it up ourselves, so type yes and press Enter. If everything is set up correctly, you'll be prompted for a password. Type it now (it won't be shown, but this is normal) and press Enter. Success: I'm welcomed to the system, and I have a command prompt. Now, if you're anything like me, passwords can be a pain to remember, so let's set up a private key so a password isn't required. We're going to gracefully log out of the system by typing logout and pressing Enter. The next thing we're going to do is create both a public and a private key. This is a one-time process, so don't worry about trying to remember every command. Type the command ssh-keygen, then a space, then -t for type, then a space; we're going to create an RSA key, so rsa, followed by -C (a capital C), and then, in double quotes, your email address, username@example.com. When you're all set, press Enter. We're prompted to specify a file location; the default will be fine, so press Enter. The next question is about a passphrase for your private key. This is an interesting question. Do you want to have a password for authentication?
The advantage is that it's more secure than just a private key; however, it means you won't have passwordless authentication. Another thing to consider: will you be using this key to authenticate with any other server? Use your best judgment. As this is a local development server, I am not going to use a password, for convenience, and will rely instead on the security of the host operating system. I'm going to leave the passphrase blank and press Enter, then Enter again to confirm. The public and private keys will be generated, and the key generator will display some information about the key. Next, we'll need to add the newly created public key to the authorized_keys file on the development server. I'm going to get slightly ahead of myself and use commands that I haven't fully described, but it's so we don't get ahead of the Windows users. We'll go over the commands in the upcoming videos. I'm going to perform a single remote command on the server. Same as before: ssh, space, -p 2222, then username@host, jpeck@sandbox.dev; however, keep going: a space, then the command mkdir, for make directory, then a space, -p to create intermediate directories as required, then a space, .ssh, the name of the directory. Press Enter, then type the password and press Enter, and the remote directory is created. Now we've got a destination, so we're going to copy over the public key so the private key can authorize a connection and decrypt communication. Start with the command cat, which is short for concatenate and will display the entire contents of a file, then a space, then ~, which means your home directory, then /.ssh/id_rsa.pub. If we hit Enter now, it'll show the contents of the public key. Press Up to resume the previous command, then add a space, and then press Shift and the key above the Return key that looks like a vertical line. This is known as a pipe. Then another space. We're going to perform a command remotely that uses the contents of the public key.
ssh, space, -p 2222, then username@hostname, jpeck@sandbox.dev, then another space and a single quote, then cat, a space, and then two greater-than symbols, which will append to a file: a space, .ssh/authorized_keys, and a final single quote. That's probably the longest command that we're going to have to use in this course. Press Enter. It'll ask us for a password one final time; type it and press Enter. No errors, that's good. We can now log directly into the server without using a password. Type ssh, space, -p 2222, space, and then the username and host, jpeck@sandbox.dev, and press Enter. No password was required. Fantastic! However, I'm getting pretty tired of typing out -p and so forth, so let's make a shortcut. Log out again. Remember nano, the editor that we used to edit the hosts file? Well, it's back. Type nano, space, ~/.ssh/config, and press Enter. We're going to specify some configuration information to be used every time we connect to the server. Type along with me, please. For the host: Host, space, sandbox.dev. Press Enter for a new line, then Tab, then Port with a capital P, space, 2222. New line, Tab, then User with a capital U, space, and then the username, jpeck for me. When complete, press Control+X to exit, then Y and Enter to save. We can now type ssh, space, sandbox.dev, and press Enter. That's a lot easier, and now we don't have to remember the port. Let's learn more about working with the command line by skipping the next video, which is intended for Windows users.
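Gathered in one place for reference, the spelled-out commands from this video look like this (jpeck is the username chosen during the Ubuntu installation; substitute your own):

```
# Generate a key pair with an email-address comment
ssh-keygen -t rsa -C "username@example.com"

# Create the .ssh directory on the server
ssh -p 2222 jpeck@sandbox.dev mkdir -p .ssh

# Pipe the public key into the server's authorized_keys file
cat ~/.ssh/id_rsa.pub | ssh -p 2222 jpeck@sandbox.dev 'cat >> .ssh/authorized_keys'
```

And the shortcut entry typed into ~/.ssh/config:

```
Host sandbox.dev
    Port 2222
    User jpeck
```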

 

3.4 Using SSH on Windows

This video is about using SSH on Windows. If you're using a Mac, please skip to the next video, or stick around, because it's good to compare and contrast. I have a secret to tell you: the Mac users might be feeling a little bit smug, or at least more so than usual, because they already have an SSH client built into their system. Well, PuTTY is a great SSH client, and there were a lot more steps to get the Mac configured than we'll need for Windows. Start by opening PuTTY. Under Host Name, we're going to specify both a username and a hostname. We'll start with the username, which is what we specified during the Ubuntu installation. For me, it's jpeck, followed by the at symbol, then the hostname, sandbox.dev. Press Tab to skip over to the port, which we'll set to what was forwarded, 2222. Finally, for the saved sessions, let's give it a name so we can reuse it: sandbox.dev. Click Save, then double-click the newly created session. The first time we connect, we'll get a scary-looking message saying that the system is unknown. That's correct: it's new, and this is a one-time message. It's okay to dismiss it by clicking Yes. We've already set the username, so just specify the password. It won't show anything as you type your password; that's not a bug, it's a feature. Press Enter, and success: I'm welcomed to the system, and I have a command prompt. Now, if you're anything like me, passwords can be a pain to remember. So, let's set up a private key so the password isn't required. Go to the Start menu, and let's run PuTTYgen. Click Generate to generate a public/private key pair. We're going to need to move the mouse around in the blank area to generate some randomness. This will take just a moment. When it's done, let's replace the key comment with your email address, username@example.com. The next question is a passphrase for your private key. This is an interesting question. Do you want to have a password for authentication?
The advantage is that it's more secure than just a private key; however, it means you won't have passwordless authentication. Another thing to consider: will you be using this key to authenticate with any other server? Use your best judgment. As this is a local development server, I am not going to use a password, for convenience, and will rely on the security of the host operating system. I'm going to leave the passphrase blank. When you're ready, click Save private key. I'll be warned: am I sure that I want to do this? Yes. Let's specify a place to save it, which is going to be the Desktop, and for the file name, I'm going to use the same username@example, but I won't add the .com, because Windows would interpret that as an executable file. Click Save, and the file is created. We'll also need to copy the public key to the server so we can use passwordless authentication. So, right-click, go to Copy, and then switch back to the terminal. We need to create a folder to store the key using the mkdir command; we're going to go over that command in greater detail in the next video. For now, just type mkdir, space, .ssh. To add the contents of the public key, type echo, which will repeat whatever you type, then a double quote, then right-click to paste the contents. Add a second double quote, a space, then two greater-than symbols. This will append to a file, which will be: a space, .ssh for the folder we created, forward slash, authorized_keys. Press Enter, and the authorized_keys file will be populated. We're going to gracefully log out of this system by typing logout and pressing Enter. To make the private key available to PuTTY, we're going to right-click on the generated key and go to Load Into Pageant. No error is displayed, but if we look down in the taskbar, the PuTTY authentication agent is running, and if we go to View Keys, we can see the created key. Click Close, and we can close the PuTTY Key Generator as well.
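Sketched as plain shell, the two commands typed in the PuTTY session look like this. The quoted key string is a placeholder for the real public key text pasted from PuTTYgen, and the scratch directory stands in for the home directory on the server:

```shell
#!/bin/sh
cd "$(mktemp -d)"   # stand-in for the home directory on the server

# Create the folder that holds SSH keys for this account
mkdir .ssh

# Append the pasted public key text to the authorized_keys file;
# the quoted string below is a placeholder, not a real key
echo "ssh-rsa AAAA...paste-key-here... username@example.com" >> .ssh/authorized_keys

# Confirm the key landed in the file
cat .ssh/authorized_keys
```

Using >> (append) rather than > (overwrite) matters here: it preserves any keys already authorized for the account.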
Let's reload PuTTY: so Start, PuTTY, double-click sandbox.dev, and finally, we're connected without a password. Let's learn how to use the command-line interface.

NOTES
Log in with PuTTY
Run PuTTYgen
Generate
Key Comment: root@example.com
Save private key
Select and copy the key text
Switch to PuTTY
Create a folder in the home directory: mkdir .ssh
Create the file: echo "right-click to paste the copied key" >> .ssh/authorized_keys
To use the saved private key with PuTTY, right-click the file on the desktop and choose Load Into Pageant

 

3.5 Navigating a command-line shell

Now that we've logged in, let's start navigating within our Linux server using the command-line shell. Over the next couple of videos, we'll be seeing a lot of commands. I recommend taking a look at the free quick reference manual that comes with this course. It's also helpful just to read the manual; the easiest way is to visit manpages.ubuntu.com with a browser and search for a particular command. Well, we've got a prompt, and that's about it. Let's take a look around. Right now, all I can see is the command prompt, which is pretty short. Let's break down the prompt so we know what we're looking at. It starts with the username that I'm logged in with, which, in my case, is jpeck. This is followed by @, then the name of the server, which is sandbox. There's a colon and a tilde. Tilde, in Unix-like systems, means the user's home directory. Home directories contain user data, settings and customizations, and so forth. The final character in the prompt, the dollar sign, means that I'm a normal user on the system. There's another variation, a pound or hash, for the root user, which is the highest administrative user on the system. Therefore, based on this prompt, I'm logged in as jpeck on sandbox, in my home directory. Where is that, exactly? To determine where I am currently, I'll use the pwd command, which shows the full name of the current working directory. Let's try it now: type pwd and press Enter. I see that my home directory, tilde, is actually located at /home/jpeck. Now that we know where we are, let's see what's in the current directory. The next command that we'll use is ls, which lists directory contents. To show the contents of the directory, type ls, then press Enter. Well, that's interesting: no files. Didn't we create a directory? Well, by default, ls hides files and directories that start with a dot. To see everything in a directory, type ls, space, -a. That's better, but not as helpful as we'd like. Let's expand on that command with one more option.
l, for long listing format. So we'll do ls, space, -la, and press Enter. That's much more comprehensive and readable. There's a lot of information here. I can tell the difference between a file and a directory by looking at the first letter of the line: d means directory, and a dash means just a regular file. For the time being, let's focus on the right two columns, containing the date created or modified and the file name. There are two interesting entries at the top of this list, . and .. Like tilde, these are special directory names: . is a link to the current directory, and .. is a link to the parent directory. This can be useful for commands where you know the relationship of the directory structure but don't know or care about the name. We can also check the contents of different directories, not just the current working directory. To see the contents of a directory, just type ls for list directory contents, -la to see everything, and then the name of the directory. Let's see what's in the .ssh directory. The authorized_keys file is the only regular file in here. Now that we know how to get a directory listing, let's actually change our location to a different directory. To change our location, we'll use the cd command, which is short for change directory. The command to change directories is cd, then a space, then the name of the directory where we want to go. Let's move to the parent directory of where we currently are, so: cd .. The prompt has updated to indicate that we're in /home. Show the directory contents again: ls -la. Currently, there's only one home directory, as there's only one user who can log into the system. Let's change directory to the system's temporary directory, which is located at /tmp. Then, list the contents: ls -la. There's not a lot to see right now, so let's make a new directory here to experiment. The mkdir command, which we used in the previous videos, is short for make directory, and it does just that.
To use it, just type mkdir, space, and then the name of the directory that you'd like to create; in this case, hello. No output is given, but no error either. If we look at the directory contents, ls -la, there's a new directory here called hello. Temporary files aren't very interesting, so let's go somewhere more interesting: the log files. We're going to change directory to /var/log. List the directory contents: ls -la. This is where most of the system's log files are stored, including the Apache web server and MySQL database logs. How can we read the contents of these log files?
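The navigation commands from this video can be strung together into one small session. This sketch uses a scratch directory from mktemp instead of /tmp, so it's safe to experiment anywhere:

```shell
#!/bin/sh
cd "$(mktemp -d)"    # a fresh scratch directory, safe to experiment in

pwd                  # print the full path of the current working directory
ls -la               # long listing of everything, including the . and .. entries
mkdir hello          # create a directory; no output means no error
ls -la hello         # list the contents of another directory by name
cd hello             # change into it...
pwd                  # ...and confirm the new location
cd ..                # .. always refers to the parent directory
pwd
```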

NOTES
manpages.ubuntu.com
~ : user home directory
$ : normal user prompt
# : root user prompt
pwd : print the current working directory
ls : list directory contents; ls -a : include hidden files; ls -la : long listing of all files
. : current directory
.. : parent directory
cd : change directory
mkdir : make directory

 

3.6 Reading and searching files

We started by learning how to navigate the file system; now let's take a look at the contents of some of these files. The cat command, which is short for concatenate, sends the contents of a file to the standard output, like the display. One of the files in the log directory is kern.log, and it contains detailed logs of the messages shown by the Ubuntu Linux kernel, which we saw during boot. Let's display the contents of kern.log now by typing cat, space, then the name of the file, kern.log, and press Enter. That's a lot of information, but that's everything that was seen during boot. What if I don't want to see the entire file? Often, there's no need to see the entire file, only the beginning or the end. With log files, the end is usually the most important part. There are two commands to do this: the first is head, which accesses the beginning of a file, and the second is tail, which accesses the end of the file. The easiest way to remember the commands is to think of a cat, which has a head at the start and a tail at the end. Therefore, if you just want to see the first part of a file, you'd use the head command. Let's see the start of kern.log. So we'll type head, space, kern.log. This shows the first few operations when Linux started up. By default, it shows ten lines. Next, let's take a look at the end of the file. So we'll type tail, for the end, and then kern.log. Similar to head, the last ten lines are shown. Seeing the beginning and the end of a file is good, but what if we want to actually scroll through the entire file? The less command is a simple and fast method of allowing a console user to page through the contents of a file, one screen or line at a time. There are a couple of keyboard controls to be aware of. The first is Return, which moves forward one line. Next is Space, or f, which will scroll forward one entire page. Conversely, b goes backwards one page. Finally, press q to quit when you're done.
Let's take another look at kern.log, but this time using less. So type less, space, kern.log. Press Space a couple of times to skip pages, then b to go backwards. When you're done, press q to quit. With all these files, it's not practical to remember where every file is. To assist, the find command searches for files in a particular directory hierarchy, defaulting to the current directory. Any files that match the optional search pattern will be listed with the relative path and filename. Let's experiment with the find command in the current directory, using the special name dot. So: find, space, dot. That's effective, but it's overwhelming. How can we filter this kind of output? The command grep is used for displaying lines that match a given pattern. Patterns can be just text, like a particular word or phrase, or as complex as regular expressions for programmatic matching. Something to note: grep matching is case sensitive by default, meaning capital A won't match lowercase a. This can be turned off with the -i option, but it's still something to be aware of. On its own, grep can be very useful, but it can also be combined with a pipe, which is just the vertical bar character. A pipe passes the output from one command to another. You can typically find the pipe on the key above the Return key; press Shift and that key to type the character. As an example, let's perform the find command again, but this time pipe the result to grep. So: find, space, dot, space, then the pipe, and then grep error. This time, we only see log files that have the name error in them. Grep can also search a particular file. Since grep matches case sensitively by default, I'll use the -i option to perform a case-insensitive search. For example, if I want to see all the authentication attempts in the authorization log for a particular username, I'll type grep -i, the username, jpeck, and then the name of the file, auth.log. This will match all authorization attempts from jpeck.
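Here's the same set of reading and searching commands run against a small generated sample file, a stand-in for kern.log and auth.log, which live in /var/log on the server:

```shell
#!/bin/sh
cd "$(mktemp -d)"

# Build a 20-line sample log: "log line 1" through "log line 20"
seq 1 20 | sed 's/^/log line /' > sample.log

cat sample.log              # dump the entire file
head sample.log             # first ten lines (the head of the cat)
tail sample.log             # last ten lines (the tail)
find .                      # list everything under the current directory
find . | grep -i SAMPLE     # pipe find's output through a case-insensitive grep
grep "line 20" sample.log   # search within a single file
```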
Speaking of authorization, how do we perform administrative commands?

NOTES
cat shows the contents of a file
head shows the first 10 lines of a file
tail shows the last 10 lines of a file
less displays a file page by page (Return: forward 1 line, Space: forward 1 page, b: back 1 page, q: quit)
find is for locating files
grep pulls out matching lines; with | it is chained onto the previous command

 

3.7 Administration commands with sudo

Sometimes we'll need to perform administrative tasks that the current user doesn't have permission to do. The sudo command is used to run programs with the security privileges of another user, which by default is root. The name sudo is a combination of the su command, which allows you to use the shell of another user, and do, which isn't a command; it just means to take action. Think of sudo like trying to access the backstage of a concert. If you just try to walk in, the bouncer won't let you pass. However, if you use sudo and say your password, he'll let you in. Let's see sudo in action by trying to access a file that we don't have permission for. There's a special file that contains the one-way encrypted passwords for all users. First, let's do a directory listing: ls -la, then the full path, which is /etc/shadow. The third column shows the owner of the file, which is root, and the fourth shows the group that the owner is in, which is shadow. My username is not root, and shadow is a special system group of which I am not a part. Let's just try to read the contents of the file using cat. So cat /etc/shadow. Permission denied, as we expected. No matter; let's get the permission we need using sudo. To get elevated privileges, just type sudo, followed by the command to be executed. So we're going to do the same thing, sudo cat /etc/shadow, and press Return. We'll be prompted for my password, so type it. Now the contents are shown, because we've used the root user's privileges. Most of the commands in the next chapter will require sudo. The ability to end the session is also important. The logout command gracefully closes the connection. Depending on the system configuration, logout can trigger system events, such as the cleanup of a particular directory. Let's log out of the shell. Type logout, and then press Enter. The connection is closed, which is a best practice when we're done working. 
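A small sketch of the ownership check described above, using a throwaway file so it runs as any user; the sudo lines are shown commented out, since they prompt for a password:

```shell
# In ls -l output, the third and fourth columns are the owner and group.
touch demo.txt
ls -l demo.txt | awk '{print $3, $4}'

# On the server, the same check on /etc/shadow prints "root shadow",
# and reading the file needs elevated privileges:
#   cat /etc/shadow        # -> Permission denied (as a regular user)
#   sudo cat /etc/shadow   # re-runs the command as root
```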
The final command we'll explore is shutdown, which closes the system at a specified time. Pretty straightforward, really, but you need the -h option to halt or power off the machine itself. shutdown also requires a time frame, which is most commonly now, which turns off the machine immediately. Log back in again for one last command: ssh sandbox.dev. Let's shut down the server: sudo shutdown -h now. We'll be prompted for the password, so give it and press Enter. The connection is closed and the virtual machine turns itself off gracefully, which includes services like MySQL, which can get corrupted if interrupted during an operation. If we look back at the hypervisor, I can see that sandbox is powered off. In this chapter, we've been exploring how to manage the server from the command line. We started with local networking fundamentals and configured the sandbox.dev hostname in the hosts file. Then we described how SSH logins work and demonstrated how to use SSH on both a Mac and Windows. Once we logged in, we navigated the filesystem with commands like ls and pwd, read and searched files with cat and grep, and finally executed administration commands with sudo to get elevated privileges. In the next chapter, we'll start installing and configuring the software we need to run a fully functional development server.

NOTES
sudo: a combination of su and do; su lets you use the system as another user, and do isn't actually a command
Under /etc there is a file called shadow. It contains the one-way encrypted passwords of the system's users
shutdown: shuts down the system; shutdown -h now

 

4. Initial Server Configuration

 

5. MySQL Database Administration

 

6. Debugging and Performance

 

7. Installing PHP Applications and Frameworks

 

8. Advanced VirtualBox Techniques

 

9. Troubleshooting a LAMP Server

 

10. Conclusion