Sunday, August 4, 2019

Product Deployment


Almost every mid-sized office system we've worked on has needed some method of cloning and deploying a specific OS configuration and application set. The PC requirements were usually the same: deploy an identical configuration with as little repetitive work as possible. The ideal target is what Microsoft calls a “zero-touch” deployment, which requires no interaction on the target computer whatsoever. We offered this using Microsoft System Center Configuration Manager (SCCM) together with the Microsoft Deployment Toolkit (MDT).
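A zero-touch deployment is driven by an answer file that suppresses the interactive setup screens. As a rough illustration - a minimal sketch, not a production answer file - an unattend.xml that hides the out-of-box-experience (OOBE) prompts might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <!-- oobeSystem pass: suppress the interactive out-of-box screens -->
  <settings pass="oobeSystem">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <OOBE>
        <HideEULAPage>true</HideEULAPage>
        <HideLocalAccountScreen>true</HideLocalAccountScreen>
        <HideOnlineAccountScreens>true</HideOnlineAccountScreens>
      </OOBE>
    </component>
  </settings>
</unattend>
```

A real deployment would add further passes (disk layout, product key, domain join), but this is the mechanism that removes the "touch" from the target machine.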

Many shops do not operate that way and require some level of interaction during the imaging process. During deployment, the technology deploys and boots Windows on dissimilar hardware, sparing technicians the task of configuring a new master system for each make of hardware that requires OS deployment.

A system disk image deploys easily onto the hardware where it was created, or onto identical hardware. However, if the motherboard is changed or a different processor version is used, the deployed system may be unbootable. Attempting to transfer the system to a new, more powerful computer will usually produce the same result, as the new hardware is incompatible with the most critical drivers included in the image. We use deployment technology that provides an efficient solution for hardware-independent system deployment by adding the crucial hardware abstraction layer (HAL) and mass storage device drivers.
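One common way to get those drivers into an image is to inject them offline with DISM before deployment. A sketch of the commands, assuming the captured image is D:\install.wim, it is mounted at C:\Mount, and the driver packages sit under D:\Drivers (all three paths are illustrative):

```
:: Mount the captured image, inject mass storage drivers, then commit
Dism /Mount-Image /ImageFile:D:\install.wim /Index:1 /MountDir:C:\Mount
Dism /Image:C:\Mount /Add-Driver /Driver:D:\Drivers /Recurse
Dism /Unmount-Image /MountDir:C:\Mount /Commit
```

The /Recurse switch walks the driver folder so one command covers every hardware make whose drivers are staged there.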

Firstly, our virtual machines provide the option to create hardware-neutral images that can be applied anywhere, regardless of what is actually in the target computer. One image can serve multiple hardware configurations. This also means less work maintaining the image, as any change only needs to be made once rather than once per type of hardware. Secondly, most virtual machine software we use can save a VM’s state and revert back to that state should it become necessary. VMware calls these “snapshots”, and Microsoft uses the term “checkpoint” in Hyper-V. Should a screw-up occur, it can be undone without losing work or having to re-do everything. These are two facets that are simply not available when building images on real hardware. Test on real hardware, but build in a virtual environment.
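In Hyper-V, for example, a checkpoint can be taken and rolled back from PowerShell - a sketch, with "BuildVM" and the checkpoint name as illustrative values:

```
# Take a checkpoint before a risky change, then roll back if needed
Checkpoint-VM -Name "BuildVM" -SnapshotName "pre-update"
Restore-VMSnapshot -VMName "BuildVM" -Name "pre-update" -Confirm:$false
```

Taking a named checkpoint before every risky step in the build (driver injection, application installs, sysprep) keeps each mistake cheap to undo.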

The build VM workstation has to have some power to it - nothing extravagant, but above average. We avoid using a laptop as a VM build station; laptops are great for testing, but a desktop PC is optimal. A quad-core CPU (Intel Core i5/i7, or AMD Phenom series) is the baseline, and the more powerful, the better. RAM is the key - the more, the better. We recommend 16GB of RAM on the workstation, which can handle three running VMs alongside the host OS. VMs take up storage space quickly; working on several VMs, it is not difficult to fill a 2TB drive. The VM server’s host OS should be as lightweight as possible. It needs to host a hypervisor and not much else. The more software we add to the host, the more packages we need to keep up to date to have a stable server.
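The RAM budgeting above can be sketched as a quick arithmetic check. A minimal sketch, assuming 4GB per build VM and a 4GB reserve for the host OS (both figures are assumptions, not measurements):

```python
def fits_on_host(host_gb, vm_allocations_gb, host_reserve_gb=4):
    """Return True if the VMs' combined RAM fits within host_gb
    after reserving host_reserve_gb for the host OS itself."""
    return sum(vm_allocations_gb) + host_reserve_gb <= host_gb

# A 16GB workstation handles three 4GB build VMs (12 + 4 reserve = 16GB)
print(fits_on_host(16, [4, 4, 4]))

# A fourth 4GB VM would push the total past the host's 16GB
print(fits_on_host(16, [4, 4, 4, 4]))
```

The same check explains why the host OS should stay lightweight: every gigabyte the host consumes comes straight out of the VM budget.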

(c) 2018, MEP Digital Systems (Pty) Ltd.
