Wednesday, September 11, 2019

Optimizing Digital Transformation


Digital transformation is a catchall term that includes a wide variety of goals. For some companies, digital transformation means leveraging cloud capabilities. For others, artificial intelligence and machine learning are more important. Increased analytics capabilities and Internet of Things implementations play roles as well. The point of digital transformation is not to take companies from paper processes to hyper-advanced and complex digitized alternatives. On the contrary, many components of digital transformation are subtle. Even the most technologically challenged business owners, given a bit of time, can lead an effective digital transformation with the right mindset. Company goals and challenges dictate the type of digital transformation to pursue. A company that seeks more customers, for instance, might consider how to implement new tech to improve the customer experience. An online chatbot or a user-friendly app might do the trick, or perhaps a business’s first step is to create store listings on Google and Apple Maps.

A company struggling to optimize its partner network, on the other hand, might invest in more analytics tools to make better choices about shipping and vendors. It doesn’t matter whether the company sells software or gravel: Digital transformation is for everyone. Businesses considering digital transformation should ask themselves where their threats and opportunities lie. Have comparable industry players already made moves toward digital transformation? Have customers begun to demand options that the company cannot provide today? Rather than wait for market influences to force their hands, though, businesses should proactively pursue digital transformation to stay ahead of the game. Most employees are set in their ways: To make the change go as smoothly as possible, business owners need to talk to employees about why the company is moving in a digital direction, especially since some employees worry that digital transformation will eventually cost them their jobs. Cents saved are cents earned, even for companies with growth goals. Before taking on new debt to tackle grand opportunities, consider how digital transformation could make existing processes cheaper and easier. Most small companies can’t afford to sink money into suboptimal campaigns the way big ones can. With a predictive analytics tool that integrates with existing tech, small businesses can stretch their advertising budget much further.

Bigger companies can afford to staff robust IT departments, but smaller companies don’t have that kind of cash. By outsourcing, small companies can maximize their resource pools and pay only for the aspects of digital transformation with the greatest impact. A good technology partner will recommend in-budget solutions that are compatible with a business’s infrastructure. Ask potential partners whether they have experience with small businesses, who at the vendor will be responsible for communications, and what kind of preparations the business should make ahead of time. Digital transformation isn’t some vague buzzword. For small businesses, digital transformation includes all the little steps forward that add up to massive cost savings, happier customers and improved prospects. Look for areas with easy opportunities for big improvements, and then use these tips to turn those potential gains into reality. Fortunately, digital transformation doesn’t require businesses to move every process to the cloud, nor does it require them to invest millions of Rands in brand-new tech. Critically, the most effective digital transformations leverage existing infrastructure and find maximum value in the most sensible improvements.

(c) 2018, MEP Digital Systems (Pty) Ltd.

Wednesday, August 28, 2019

Service Subscription Business Model


Subscription business models are taking hold in many different industries, supported by advances in cloud computing, data analytics, and the Internet of Things (IoT). Suppliers benefit from longer and wider revenue streams for their products, while customers avoid big upfront payouts and are able to standardize and spread out their expenses. Subscription-based services certainly aren’t new. But the model that’s evolved in the last several years is more robust and widely applicable than any previous one. It has radically overhauled at least one major industry. Subscription cloud computing services, with their flexibility, cost savings, and performance advantages, are increasingly replacing on-premises software as the technology platform of choice for enterprise IT. The as-a-service model is catching on quickly in other industries, including construction, where new, sophisticated machinery is extremely expensive and aging equipment is a competitive disadvantage. One major manufacturer of construction equipment is expanding its digital services for high-end products to include real-time monitoring of parts and performance to identify weaknesses and wear.

In healthcare, IoT-enabled devices already relay patients’ health data immediately and consistently to hospitals and medical practices, implemented as part of a remote monitoring service. In the auto industry, Volvo’s recent advertising campaign extols the manufacturer’s new subscription program, under which its monthly vehicle charges include taxes, insurance, and both roadside and at-home services. This program eliminates the down payment and end-of-service fees typical of auto leases. Other auto manufacturers and dealers offer similar programs. Subscription services are popping up in unexpected places. In anticipation of the impending shift to 5G wireless networks, a major network device manufacturer is offering consumers 5G router hardware, software, and support bundled as a “premium service.” A European utility offers consumers a service that, for a monthly fee, monitors the temperature and functioning of a customer’s hot water tank through an attached IoT device. Temperature loss results in a text message and/or phone call to the customer, expediting a visit from a technician already armed with relevant data.

According to recent reports, the as-a-service model appeals to millennials, many of whom value convenience over ownership. Those strapped by college debt also appreciate the ability to spread out their costs, for everything from clothes to furniture. Still, the consumer-oriented subscription market isn’t without its challenges. For example, recent efforts to offer IoT-enabled, app-connected electric scooters as “shared services” have no doubt created a ready-made market of urban commuters looking for mobility alternatives, but issues related to traffic rules, safety regulations, and liability are holding back the services, while a glut of competitors is complicating the economics. All of this adds up to a typical day in e-commerce land. Which of these new online as-a-service businesses will succeed over the long haul, and in what form, isn’t exactly clear. What is clear is that the current “let’s see what sticks” business environment requires savvy companies to overcome any lingering trepidation about the model cannibalizing their current revenue streams and explore where “as-a-service” might fit into their product plans. In addressing such an open-ended imperative, cloud computing can help. MEP Digital Systems, for instance, offers field service management software tailored specifically to the as-a-service subscription model.

Cybersecurity Awareness Campaign!





Dear Future Victim,

PLEASE PANIC!

Cower in the corner under a toilet paper fort with a pile of ammo for a pillow. Meanwhile, I’m hacking your corporate network. Work from home, they said. Self-isolate, they said. Avoid contagion, they said. They forgot about me, for I am a DORMANT CYBER PATHOGEN. Dormant no longer! While you’re avoiding biological infection, I am quietly spreading my digital contagion throughout your organization, ready to flip the switch at just the right moment: RANSOM TIME!

God, I love the smell of ransomware in the morning. Nothing like the sweet, sweet aroma of bitcoin in the aftermath of a little bit of racketeering. A racketeering cyber pathogen–that’s me! Mixing metaphors like bleach and sulfuric acid, but it don’t matter cause at the end of the day it’s BLING BLING, CHING CHING TIME, when I count up my illicit Bitcoin gains and then fill a vast silo with the same number of gold coins so I can swim in my loot like Scrooge McDuck. (Don’t tell me you never wondered what that would be like.)

How did I rise to my current eminence, sitting Smaug-like on a load of loot? Simple. I waited for you to make mistakes. Errors made because the boss said, “Just make it work!” You had 24 hours to set up work-from-home for an army of cubicle natives, unaccustomed to the sweatshop hours of pajama productivity. Some of my fave mistakes you make are also the easiest for you to fix. No wonder I’M KING OF THE WORLD!

DON’T DEPLOY A VPN. 

Force your employees to directly connect to the tons of new internet-facing services you just put online cuz your boss said productivity is the number one priority. We’ll see how much he likes productivity when I take his entire network for ransom. Go ahead. Punch a hole through the corporate firewall and give RDP access to a bunch of employees–and to the entire internet!

Just to make sure I was doing this crime thing correctly, I caught up with Johnny Xmas, obviously not his real name (duh!), a senior researcher for the cybersecurity R&D firm GRIMM. He told me I was totally on the right track. The number of Remote Desktop Protocol (RDP) servers exposed to the internet is increasing substantially day by day. Do they all have MFA on them? Probably not. Why are we directly exposing them to the internet? Employees should VPN into the corporate network and then RDP into the machine. Trust me, I’m gonna love that unpatched Windows XP box covered in dust the IT department just gave the whole world access to. In fact, my only real problem will be keeping other attackers from partying with me–that’s my box! Bad APT! Bad APT! Take your advanced persistency and go threaten someone else!

I get very territorial when doing crimes. It’s a question of ethics. Only one racketeering play at a time. This Windows box ain’t big enough for the both of us. DRAW, STRANGER! But I digress. Ever since escaping WestWorld things have been a bit strange. (Oops, now you know my secret, you won’t dob me in, will you? Please, guvnuh, can I have some more?) Oh look, an employee working from a personal device!

USE PERSONAL DEVICES.

I loves it when you do this. Access confidential business information from the unsecured personal laptop full of third-party software (read: malware)! So easy to pop. SNAP CRACKLE POP, I’M RANSOMWARE! TAA-DAHHH! So when I send you my handcrafted, artisanal phishing emails linking to websites such as my freshly registered Covid19MedicalAdvice.com with an urgent subject line “Employee Health & Safety” from a spoofed email pretending to be the CEO, my RAT will out-CAT your consumer-grade anti-virus.

There’s never been a better time to go phishing. “When people get scared, they may not be as focused as they need to be, looking at these links and email addresses,” I once heard NetScout CSO Debby Briggs say. “If I’m the person trying to break in, I’m going after email, and I’m going to create fake websites.” When your panic-addled brain sends an electrical impulse to your mouse-clicking fingers, then my malware will be coming down your fiber optic like a giant uncovered digital sneeze. Here’s hoping COVID-19 doesn’t jump the meatspace-digital barrier and start infecting computers. I may be a dormant cyber pathogen awakening from my slumber, but compared to COVID-19, I’m an infectious amateur.

You’re l33t, bro. Yeah. I’m talking to you. You with the classy hacker handle: “COVID-19.” You may still be a teenager but mad respect for your skillz. Let’s get a little bro-mance going on here, between two infectious geeks. I know they say we can never be together, you a biological agent of doom, me a digital agent of doom, but look at how much we have in common: WE ARE BOTH AGENTS OF DOOM! Think about it, bro boo. You call me. Yeah, I’m making that thumb and little finger gesture that looks nothing like a phone. I never thought I could fall for a virus I didn’t create myself, but that’s love for you, I guess.

NO 2FA? NO PROBLEM!

For the love of my ill-gotten plunder, do not, I repeat, DO NOT enroll your employees in any kind of two-factor authentication program. Nothing bursts my bubble as a digital agent of doom more than having to end-run around properly configured 2FA, especially you awful, horrible people who use U2F tokens like Yubikeys. See that cartoon steam pouring horizontally from both of my ears? That’s how I feel about 2FA, YOU WASCALLY WABBIT, YOU! What? I’M the wascally wabbit? Wow. Looking in a mirror hurts.

FORGET EMPLOYEE TRAINING

Embrace your cynicism and repeat after me: “If education is the solution to your security problem, you’ve already lost.” Everything is lost! Give in to panic! Give in to hopelessness! What’s the point of living? Just accept my malware into your life, like the Gospel of badness it is! Because in a pinch training can be quite effective, and we do NOT want any of that happening, now do we, my dears? “It’s not possible in a two-week period, much less 24 hours, to roll out a full MDM [Mobile Device Management] solution to enforce and monitor policies,” Xmas says. “So, it’s important to get the verbal policy out there, to train work staff on secure practices.” 

“People won’t listen all the time when dealing with this emergency,” he adds, “but security is never all or nothing. We do what we can when we can and work towards building up to a perfect solution in the end.” Remember, folks: The good is the enemy of the perfect. Strive for the impossible! Strive for true innovation! Meanwhile, I’ll be holding your network for ransom. Now, if you’ll excuse me, I have a silo of gold coins to go swim in.

Sunday, August 25, 2019

Information Logistics


All organisations and companies are dependent on information. It can be found within IT systems, in binders, in computer files on individual computers, or in sophisticated management information systems. Information logistics is the branch of computer science and logistics concerned with implementing systems that perform complex planning tasks using advanced mathematics and highly sophisticated algorithms. The discipline can be viewed as the bedrock of route planning software in the global distribution business. In this sphere, the industry is concerned with optimising the flow of goods and services, and all relevant information, between an origin and a destination point.

An Information Element (IE) is an information component located in the organizational value chain. The combination of certain IEs leads to an information product (IP), which is any final product, in the form of information, that a person needs. When a large number of different IEs is required, capacity planning becomes more difficult, which can lead to non-delivery of the IP. Data Logistics is a concept that developed independently of Information Logistics in the 1990s, in response to the explosion of Internet content and traffic that followed the invention of the World Wide Web (WWW). The growth in the volume of Web hits, combined with the steady increase in the size of Web-delivered objects such as images, audio and video clips, resulted in localized overloading of the bandwidth and processing resources of the local and/or wide area network and/or the Web server infrastructure. The resulting Internet bottleneck can cause Web clients to experience poor performance or complete denial of access to servers that host high-volume sites (the so-called Slashdot effect).

The goal of Information Logistics is to deliver the right product, consisting of the right information elements, in the right format, at the right place, at the right time, for the right people, at the right price, all driven by customer demand. When this goal is achieved, knowledge workers are equipped with the right information for the task at hand, improving their interactions with customers, and machines are enabled to respond automatically to meaningful information. Methods for achieving the goal are:
- the analysis of information demand
- intelligent information storage
- the optimization of the flow of information
- securing technical and organizational flexibility
- integrated information and billing solutions

The supply of a product is part of the discipline of Logistics. The purpose of this discipline is described as follows:

Logistics is the study of the planning and the effective, efficient execution of supply. Contemporary logistics focuses on the organization, planning, control and implementation of the flows of goods, money, information and people. Information Logistics focuses on information. Information (from the Latin informare: "to shape, to instruct") means, in a general sense, everything that adds knowledge and thus reduces ignorance or imprecision. In a stricter sense, information becomes information only for those who can interpret it. Interpreting information produces knowledge.

It entails the organisation and prioritisation of tasks by using applications such as resource planning software and associated algorithms within the overall supply and distribution infrastructure. In such contexts, route optimisation is a keyword, as there is almost certainly never going to be a perfect set of delivery circumstances. These constraints on perfection in the real world are exemplified by such things as diversions or adverse operational conditions, and they are often updated in real time. In an information logistics system such as a multi-use resource planner, the constraints are expressed as a series of mathematical equations and functions which take the form of inequalities. In mathematics, an inequality is a relation between two values that are not equal: it may state simply that value X does not equal value Y, or that one value must not exceed another. In the most simplistic terms, when an inequality constraint comes into play, a further restriction is placed on route optimisation, and that means the delivery schedule may be affected; at the very least, the schedule must be flexible enough to accommodate the change. The presence of a new inequality means that new information must be input to the computer network and a new set of options displayed.
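To make the idea concrete, here is a minimal TypeScript sketch, with invented names such as Stop and isFeasible, of how a planner might express capacity and time-window constraints as inequalities and re-check a delivery schedule when new information arrives in real time:

```typescript
// A delivery stop with a demand and a time window (all values hypothetical).
interface Stop {
  name: string;
  demand: number;     // units to deliver
  earliest: number;   // time window opens, minutes after depot departure
  latest: number;     // time window closes
  travelTime: number; // minutes from the previous stop
}

// Each constraint is an inequality that must hold for the route to be feasible.
function isFeasible(route: Stop[], vehicleCapacity: number): boolean {
  // Capacity constraint: total demand <= vehicle capacity.
  const totalDemand = route.reduce((sum, s) => sum + s.demand, 0);
  if (totalDemand > vehicleCapacity) return false;

  // Time-window constraints: earliest <= arrival <= latest at every stop.
  let clock = 0;
  for (const stop of route) {
    clock += stop.travelTime;
    if (clock > stop.latest) return false;  // inequality violated
    clock = Math.max(clock, stop.earliest); // wait if we arrive early
  }
  return true;
}

// New information (a diversion) arrives in real time: update a travel time
// and re-check the schedule.
const route: Stop[] = [
  { name: "A", demand: 4, earliest: 0, latest: 60, travelTime: 20 },
  { name: "B", demand: 3, earliest: 30, latest: 90, travelTime: 25 },
];
console.log(isFeasible(route, 10)); // true: all inequalities hold
route[0].travelTime = 70;           // road closure reported
console.log(isFeasible(route, 10)); // false: re-plan and display new options
```

When an updated value violates one of the inequalities, the planner knows the schedule must be recomputed and a new set of options displayed, exactly as described above.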

The Digital Evolution in Technology


JAMstack is revolutionising the way we think about workflow by providing a simpler developer experience, better performance, lower cost and greater scalability. JAM stands for JavaScript, APIs & Markup: a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. Dynamic functionality is handled by JavaScript, with no restriction on which framework or library you must use. Server-side operations are abstracted into reusable APIs and accessed over HTTPS with JavaScript; these can be third-party services or your own custom functions. Websites are served as static HTML files, which can be generated from source files, such as Markdown, using a Static Site Generator. JAMstack websites don't have to be static: there are great services available to help bring dynamic data to your product, and you can abstract your own functions into reusable APIs using AWS Lambda functions or Netlify Functions. Many JAMstack products have dynamic comment sections, typically used on blogs and a great way to interact with your audience. Setting up an online store on the JAMstack has never been easier.
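As a rough illustration of the "prebuilt Markup" idea, here is a TypeScript build script (the posts array and dist/ output folder are invented for the example) that renders content into static HTML files once, at build time, so the web server only ever has to deliver plain files:

```typescript
import { mkdirSync, writeFileSync } from "fs";

// Hypothetical content source; on a real JAMstack site this might come from
// Markdown files or a headless CMS API, fetched at build time.
const posts = [
  { slug: "hello-world", title: "Hello World", body: "First post." },
  { slug: "why-jamstack", title: "Why JAMstack", body: "Prebuilt markup." },
];

// Render each post to a static HTML file, generated once at build time.
mkdirSync("dist", { recursive: true });
for (const post of posts) {
  const html = `<!DOCTYPE html>
<html>
  <head><title>${post.title}</title></head>
  <body>
    <h1>${post.title}</h1>
    <p>${post.body}</p>
    <!-- Dynamic behaviour is layered on with client-side JS calling APIs. -->
  </body>
</html>`;
  writeFileSync(`dist/${post.slug}.html`, html);
}
console.log(`Generated ${posts.length} static pages in dist/`);
```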

Let's also look at the single-page application (SPA), a web application or web site that interacts with the user by dynamically rewriting the current page rather than loading entire new pages from a server. This approach avoids interruption of the user experience between successive pages, making the application behave more like a desktop application. In a SPA, either all necessary code – HTML, JavaScript, and CSS – is retrieved with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. The page does not reload at any point in the process, nor does control transfer to another page, although the location hash or the HTML5 History API can be used to provide the perception and navigability of separate logical pages in the application. Interaction with the single-page application often involves dynamic communication with the web server behind the scenes.
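A minimal sketch of the pattern in TypeScript, assuming a page that contains a container element with id "app", shows how the History API keeps the URL in step with logical pages while the page itself never reloads:

```typescript
// Minimal client-side router: swap content in place and use the HTML5
// History API so the URL still reflects the current "logical" page.
const routes: Record<string, string> = {
  "/": "<h1>Home</h1>",
  "/about": "<h1>About</h1>",
};

function render(path: string): void {
  const app = document.getElementById("app"); // assumed container element
  if (app) app.innerHTML = routes[path] ?? "<h1>Not found</h1>";
}

// Call navigate("/about") from link click handlers instead of letting the
// browser follow the href.
function navigate(path: string): void {
  history.pushState({}, "", path); // change the URL without a page load
  render(path);
}

// Back/forward buttons fire popstate: re-render, never reload.
window.addEventListener("popstate", () => render(location.pathname));
render(location.pathname); // initial render
```

A real SPA would render with a framework and fetch data from APIs inside render(), but the control flow is the same.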

Let’s say your next project is going to be a simple HTML website for a résumé, marketing a product or service, documenting your software, or something along those lines. A great option for you is to build your website using a static site generator (SSG). There are tons of static site generators in a range of programming languages, such as JavaScript, Ruby and Go; the list goes on. A common CMS (Content Management System), like WordPress for instance, builds the web page dynamically as it is being requested by the client: it assembles all the data from the database and processes the content through a template engine. With all the options available, it’s easy to feel paralyzed when it comes to choosing a static site generator that fits the bill, but a few considerations can help you sift through what’s on offer. Your project’s requirements should throw some light on the features you should be looking for in your SSG. If your project needs lots of dynamic capabilities out of the box, then Hugo and Gatsby could be a good choice. As for build and deploy time, all of the SSGs mentioned here perform very well, although Hugo seems to be the favorite, especially if your website has a lot of content. Is your project a blog or a personal website? In this case, Hugo and Gatsby could be excellent choices, while for a simple documentation website VuePress would be a great fit. If you’re planning an e-commerce website, then you might want to consider which SSG fits in well with a headless CMS for store management; in this case, Gatsby and Nuxt could work pretty well. One more thing you might want to consider is your familiarity with each of the SSG languages. If you program in Go, then Hugo is most likely your preferred choice. As for the remaining options, they’re either built on top of React (Next and Gatsby) or Vue (Nuxt and VuePress).


Now, a headless content management system, or headless CMS, is a back-end-only content management system (CMS) built from the ground up as a content repository that makes content accessible via a RESTful API for display on any device. The term “headless” comes from the concept of chopping the “head” (the front end, i.e. the website) off the “body” (the back end, i.e. the content repository). Next is serverless architecture (also known as serverless computing or function as a service, FaaS), a software design pattern where applications are hosted by a third-party service, eliminating the need for server software and hardware management by the developer. Applications are broken up into individual functions that can be invoked and scaled individually. Hosting a software application on the internet usually involves managing some kind of server infrastructure. Typically this means a virtual or physical server that needs to be managed, as well as the operating system and other web server hosting processes required for your application to run. Using a virtual server from a cloud provider such as Amazon or Microsoft does eliminate the physical hardware concerns, but still requires some level of management of the operating system and the web server software processes.

With a serverless architecture, you focus purely on the individual functions in your application code. Services such as Twilio Functions, AWS Lambda and Microsoft Azure Functions take care of all the physical hardware, virtual machine operating system, and web server software management. You only need to worry about your code. PaaS, or Platform as a Service, products such as Heroku, Azure Web Apps and AWS Elastic Beanstalk offer many of the same benefits as Serverless (sometimes called Function as a Service or FaaS). They do eliminate the need for management of server hardware and software. The primary difference is in the way you compose and deploy your application, and therefore the scalability of your application. With PaaS, your application is deployed as a single unit and is developed in the traditional way using some kind of web framework like ASP.NET, Flask, Ruby on Rails, Java Servlets, etc. Scaling is only done at the entire application level. You can decide to run multiple instances of your application to handle additional load. With FaaS, you compose your application into individual, autonomous functions. Each function is hosted by the FaaS provider and can be scaled automatically as function call frequency increases or decreases. This turns out to be a very cost effective way of paying for compute resources. You only pay for the times that your functions get called, rather than paying to have your application always on and waiting for requests on so many different instances.
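To illustrate, an individual function in the FaaS style might look like the following TypeScript sketch, shaped loosely after the AWS Lambda Node.js handler convention (the event fields shown here are simplified assumptions rather than a complete API definition):

```typescript
// One small, autonomous function. The provider invokes the exported handler
// per request and scales instances automatically; you pay per invocation.
interface ApiEvent {
  queryStringParameters?: { name?: string };
}

interface ApiResult {
  statusCode: number;
  body: string;
}

export const handler = async (event: ApiEvent): Promise<ApiResult> => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

Because each function is deployed and billed independently, the platform can scale a busy function without touching the rest of the application, which is what makes per-invocation pricing possible.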

You should especially consider using a serverless provider if you have a small number of functions that you need hosted. If your application is more complex, a serverless architecture can still be beneficial, but you will need to architect your application very differently. This may not be feasible if you have an existing application. It may make more sense to migrate small pieces of the application into serverless functions over time.

(c) 2019, MEP Digital Systems (Pty) Ltd.

Saturday, August 24, 2019

Document Management


An electronic document management system (EDMS) is a software system for organizing and storing different kinds of documents. It is a more particular kind of document management system, a more general type of storage system that helps users to organize and store paper or digital documents. EDMS refers more specifically to a software system that handles digital documents rather than paper documents, although in some instances these systems may also handle digitally scanned versions of original paper documents. An electronic document management system provides a way to centrally store a large volume of digital documents, and many of these systems also include features for efficient document retrieval.

Some experts point out that the electronic document management system has a lot in common with a content management system (CMS). One major difference, though, is that most CMS systems involve handling a variety of Web content from a central site, while a document management system is often primarily used for archiving.

In order to provide good classification for digital documents, many electronic document management systems rely on a detailed process for document storage, including certain elements called metadata. The metadata around a document will provide easy access to key details that will help those who are searching archives to find what they need, whether by chronology, topic, keywords or other associative strategies. In many cases, the specific documentation for original storage protocols is a major part of what makes an electronic document management system so valuable to a business or organization.
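As a simple sketch of how such metadata enables retrieval, consider the following TypeScript model (the field names and search shape are illustrative, not any product's actual schema):

```typescript
// A document record carrying the metadata that makes archive search possible.
interface DocumentRecord {
  id: string;
  title: string;
  storedAt: Date;     // chronology
  topic: string;      // topical retrieval
  keywords: string[]; // keyword search
}

// Retrieve by any combination of the associative strategies mentioned above.
function search(
  archive: DocumentRecord[],
  query: { topic?: string; keyword?: string; after?: Date }
): DocumentRecord[] {
  return archive.filter(
    (doc) =>
      (query.topic === undefined || doc.topic === query.topic) &&
      (query.keyword === undefined || doc.keywords.includes(query.keyword)) &&
      (query.after === undefined || doc.storedAt >= query.after)
  );
}

// Example: every invoice stored since the start of 2019.
const archive: DocumentRecord[] = [
  {
    id: "doc-001",
    title: "Supplier invoice, March",
    storedAt: new Date("2019-03-05"),
    topic: "invoices",
    keywords: ["supplier", "payment"],
  },
];
console.log(search(archive, { topic: "invoices", after: new Date("2019-01-01") }));
```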

Typically, document management refers to a centralized software system that captures and manages both digital files and images of scanned paper documents. Electronic document management systems share many similar features with enterprise content management (ECM) systems; however, document management software systems focus on the use and optimization of active documents and structured data, such as Word documents, PDF files, Excel spreadsheets, PowerPoint, emails and other defined formats, whereas ECM systems also manage unstructured content and rich media formats.

However, electronic document management is much more than simply scanning and saving: it is a comprehensive system that enables knowledge workers to efficiently organize and distribute documents across the organization for better, integrated use within daily operations. Electronic document management systems contain tools for:
- Creating digital files and converting paper documents into digital assets.
- Easily sharing digital documents with the right knowledge workers.
- Centrally organizing documents in standardized file structures and formats.
- Storing and accessing information for more efficient use.
- Securing documents according to standardized compliance rules.

By centralizing information use and access, document management is the hub on which broader information management strategies like ECM, records management and business process automation can be connected and deployed. Organizations typically start using electronic document management systems to transform paper-based operations after reaching an internal tipping point: customer response times become too slow, departments don’t have enough bandwidth to solve recurring process bottlenecks, paper archiving becomes too costly, or a data breach or compliance fine exposes large-scale regulatory risk.

For organizations that have defined but resource-intensive business processes, an EDMS is an ideal fit. Document management helps organizations across industries sidestep this busywork entirely by eliminating manual document maintenance, reclaiming valuable staff time and boosting the bottom line. Document management projects are often initiated by IT departments as a means to standardize information access when data practices vary greatly between departments. If one department is using its own imaging system and another team keeps files in uncontrolled shared drives, personal file folders or cloud storage solutions, IT leaders will look for a comprehensive EDMS to provide one corporate standard for modernizing and managing document use while digitizing existing business rules for each department.

Specifically, department managers who oversee back-office processes like HR management, accounting and contract management (or other repetitive processes with defined steps) also frequently catalyze the move towards an EDMS. Starting a small-scale document management project for a specific process pain point helps demonstrate initial gains that can be applied across other areas of the business. Modern EDMS offerings can combine on-premises and cloud-based solutions in order to cover all the bases of data extraction and use in core business processes. Process automation not only saves time but positions the EDMS as a necessary tool for improving customer and client satisfaction.

Tuesday, August 20, 2019

Commercial Digital Displays


As a digital signage solution provider, MEP Digital Systems consultants are constantly asked by potential clients why they should use commercial screens, or Large Format Displays (LFDs), rather than residential screens for in-store digital signage. Here are some of the most compelling reasons. Residential screens, the kind sold at your local electronics retailer, have been designed to work in homes; they are not designed to be installed in a QSR (Quick Service Restaurant), where there is excess heat and oil in the air, or at an airport, where they have to run 24 hours a day. All LFDs are designed with enclosures built to handle the stresses of the environments we install them in, with additional cooling vents, toughened front panels and covers that can handle corrosion and dust. Additionally, these long-lasting displays are designed with thinner bezels to maximise the viewing area, which is key to digital signage.

Large Format Displays do not only have the basic input ports found on normal residential screens, which are used to connect your Blu-ray player, DSTV decoder and surround sound system at home; they also have serial ports that let us monitor the LFD centrally, to detect whether the display is on or off, manage its temperature, and so on. Additional ports such as Ethernet, RS232 and extra USB ports further enhance the capability to network and communicate with the display. The electrical power units onboard commercial displays are also more industrial, to manage variation in the electricity feed to the display. Retail environments, for instance, place far greater demands on the electrical supply than residential environments, which can lead to power outages. A residential screen will not automatically switch on after an outage, whereas a commercial screen will, and it can also switch on and off automatically based on a schedule.

The individual components of LFDs are all designed and built from heavy-duty parts, which ensures that commercial screens are up to the task of working constantly for 12 to 24 hours per day in harsh environments. This is in contrast to the family room or lounge, where residential screens work for up to 8 hours a day. LFDs are also built to work perfectly in both landscape and portrait modes, whereas portrait positioning will normally cause ghosting of an image on a residential screen. Over the years most reputable residential screen suppliers have offered a basic warranty, but with Samsung, our LFD supplier, you get a three-year warranty that provides you, the client, with peace of mind over the period. Large Format Displays have always been perceived to be more expensive, but they are designed for digital signage and are robust enough to deliver on your 24/7/365 business requirement.


Saturday, August 17, 2019

Blockchain For Supply Chains


Leaders from the global supply chain and logistics industry, the world’s largest ports, blockchain start-ups, importers/exporters and civil society have partnered with more than 20 governments to accelerate the responsible and strategic use of blockchain across supply chains. The Blockchain in the Supply Chain project is a new initiative to help supply chain decision-makers cut through blockchain hype and ensure the emerging technology is utilized in a secure, responsible and inclusive way that benefits all stakeholders. Globally, the supply chain industry is fragmented, with many parties operating in silos. Blockchain presents a technology promise that could have far-reaching implications for global trade and supply chains, bringing standardization, alignment and transparency. But the technology is prone to hype.

A blockchain is a decentralized, distributed and public digital ledger that is used to record transactions across many computers so that any involved record cannot be altered retroactively without the alteration of all subsequent blocks. This allows the participants to verify and audit transactions independently and relatively inexpensively. A blockchain database is managed autonomously using a peer-to-peer network and a distributed timestamping server, and transactions are authenticated by mass collaboration powered by collective self-interest. Such a design facilitates robust workflow where participants' uncertainty regarding data security is marginal. The use of a blockchain removes the characteristic of infinite reproducibility from a digital asset. It confirms that each unit of value was transferred only once, solving the long-standing problem of double spending. A blockchain has been described as a value-exchange protocol. A blockchain can maintain title rights because, when properly set up to detail the exchange agreement, it provides a record that compels offer and acceptance. Blocks hold batches of valid transactions that are hashed and encoded into a Merkle tree. Each block includes the cryptographic hash of the prior block in the blockchain, linking the two, and the linked blocks form a chain. This iterative process confirms the integrity of the previous block, all the way back to the original genesis block.
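The hash-linking mechanism can be sketched in a few lines of TypeScript using Node's built-in crypto module (for simplicity, this toy version hashes the raw transaction list directly instead of building a full Merkle tree):

```typescript
import { createHash } from "crypto";

// Each block stores the hash of its predecessor, so altering any historical
// block changes its hash and breaks every later link in the chain.
interface Block {
  index: number;
  transactions: string[];
  previousHash: string;
  hash: string;
}

function hashBlock(index: number, transactions: string[], previousHash: string): string {
  return createHash("sha256")
    .update(`${index}|${transactions.join(",")}|${previousHash}`)
    .digest("hex");
}

function addBlock(chain: Block[], transactions: string[]): void {
  const prev = chain[chain.length - 1];
  const index = prev ? prev.index + 1 : 0;
  const previousHash = prev ? prev.hash : "0".repeat(64); // genesis marker
  chain.push({ index, transactions, previousHash, hash: hashBlock(index, transactions, previousHash) });
}

// Verify integrity all the way back to the genesis block.
function isValid(chain: Block[]): boolean {
  return chain.every((block, i) => {
    const linked = i === 0 || block.previousHash === chain[i - 1].hash;
    return linked && block.hash === hashBlock(block.index, block.transactions, block.previousHash);
  });
}

const chain: Block[] = [];
addBlock(chain, ["shipment 42 loaded at port"]);
addBlock(chain, ["shipment 42 cleared customs"]);
console.log(isValid(chain)); // true
chain[0].transactions[0] = "shipment 42 never existed"; // retroactive tampering
console.log(isValid(chain)); // false: every subsequent block is invalidated
```

Tampering with the first block changes its hash, which no longer matches the previousHash stored in the next block, so the whole chain fails validation.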

Sometimes separate blocks can be produced concurrently, creating a temporary fork. In addition to a secure hash-based history, any blockchain has a specified algorithm for scoring different versions of the history so that one with a higher score can be selected over others. Blocks not selected for inclusion in the chain are called orphan blocks. Peers supporting the database have different versions of the history from time to time. They keep only the highest-scoring version of the database known to them. Whenever a peer receives a higher-scoring version (usually the old version with a single new block added) they extend or overwrite their own database and retransmit the improvement to their peers. There is never an absolute guarantee that any particular entry will remain in the best version of the history forever. Blockchains are typically built to add the score of new blocks onto old blocks and are given incentives to extend with new blocks rather than overwrite old blocks. Therefore, the probability of an entry becoming superseded decreases exponentially as more blocks are built on top of it, eventually becoming very low. There are a number of methods that can be used to demonstrate a sufficient level of computation. Within a blockchain the computation is carried out redundantly rather than in the traditional segregated and parallel manner.
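The scoring-and-selection rule itself can be expressed compactly. In the hedged sketch below, chain length stands in for whatever scoring algorithm a given blockchain actually specifies (proof-of-work weight, for example):

```typescript
// Fork resolution: a peer keeps whichever version of the history scores
// highest. The scoring rule is pluggable.
function chooseChain<B>(versions: B[][], score: (chain: B[]) => number): B[] {
  return versions.reduce((best, candidate) =>
    score(candidate) > score(best) ? candidate : best
  );
}

// Two peers disagree after a temporary fork; the longer branch wins here.
const forkA = ["genesis", "b1", "b2"];
const forkB = ["genesis", "b1", "b2'", "b3"];
console.log(chooseChain([forkA, forkB], (c) => c.length)); // forkB is kept
```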

Blockchain has the potential to revolutionize a range of sectors where trust is needed among parties with misaligned interests. But it is precisely within these contexts that deploying such a new and complex technology can be the most difficult. Providing increased efficiency, transparency and interoperability across supply chains has been one of the most fertile areas for blockchain experimentation, illustrating both the opportunities and challenges in realizing the transformative potential of this technology. Over 100 organizations and experts have joined the project to co-design an open-source toolkit which will streamline the deployment of blockchain throughout a broad and diverse sector. A multistakeholder community, representing large shippers, supply chain providers and governments, will design governance frameworks to accelerate the most impactful uses of blockchain in port systems – in a way that is strategic, forward-thinking, and globally interoperable; and by which countries across the economic spectrum will be able to benefit. This team will release white papers each month focusing on the findings from the project community. The recommendations will include guidelines on data privacy, security, creation and use of data, public versus private platforms, interoperability, digital identity and signatures. Supporting an approach that considers the entire ecosystem promises to ensure an inclusive perspective and result that will benefit all stakeholders.

The World Economic Forum’s project, Redesigning Trust: Blockchain for Supply Chains, is the work of the Centre for the Fourth Industrial Revolution – which brings together governments, leading companies, civil society and experts from around the world to co-design and pilot innovative approaches to the policy and governance of technology. An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. Opponents say that permissioned systems resemble traditional corporate databases, do not support decentralized data verification, and are not hardened against operator tampering and revision. A blockchain, if it is public, allows anyone who wants access to observe and analyse the chain data, provided they have the know-how. We can join in the effort to streamline new and complex technologies like blockchain, helping to revolutionize sectors and ecosystems and build trust globally.


Wednesday, August 7, 2019

Tag Management System


Tags are small snippets of code that add new features or functionalities to your site, like web analytics tracking, remarketing or conversion tracking, optimization and testing services, and many other marketing technologies. A tag, sometimes called a pixel, is a piece of JavaScript code that most vendors require users to integrate into their web and mobile sites to perform a task such as advertising, live chat, or product recommendations. In addition to supporting your digital marketing efforts, these tags collect unique visitor behaviour information.

A tag management system (TMS) makes it simple for users to implement, manage, and maintain tags on their digital properties with an easy-to-use web interface. Using a TMS is integral to providing a foundation for your organization’s data collection and governance needs while helping to drive better customer experiences. Tag management is not the most exciting name for a crucial technology, and it is often confused with blog tags, tag clouds or search engine meta tags; tag management is not related to any of those. Tags are a means to collect and move data between a website or mobile app session and the technology vendor. Nevertheless, that is how the industry evolved, and the name stuck, although it is quickly becoming part of a larger data conversation.

Tag management has moved rapidly to help manage tags and data outside of the traditional website. Companies are using tag management to control and manage their customer data and vendors across web, mobile, IoT, and connected devices. One thing to note about mobile apps is that instead of using a master tag from the tag management provider, they leverage a library that serves the same essential purpose. Once the library is added to the mobile app, marketers and mobile developers can add analytics and other solutions without having to recertify the app in the mobile app marketplaces.

Tag management systems control the deployment of all other tags and mobile vendor deployments via an intuitive web interface, without requiring any software coding, making it easy to add, edit or remove any tag with point-and-click simplicity. Enterprise tag management solutions, compared to free tag managers, also deliver a variety of advanced capabilities, such as customization, data management, privacy controls, mobile application support and much more. Because of its strategic position in the data supply chain, the tag management system has rapidly evolved to become a key part of the foundation for data management and collection. A tag management system collects first-party data, which is considered the most powerful and relevant data because it comprises the behaviors and interactions you compile on your own visitors. This robust first-party data can be used to create unified customer profiles, driving more timely and relevant omnichannel experiences while fueling business intelligence initiatives and streamlining data warehouse projects.
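Conceptually, the container a TMS places on a page is just a loader that injects vendor tags from configuration rather than from hard-coded page source. Here is a hedged TypeScript sketch; the TagConfig shape and the URLs are invented for illustration:

```typescript
// A toy tag container: vendor tags are data, toggled from the TMS interface,
// and the loader injects each enabled tag at runtime.
interface TagConfig {
  name: string;
  src: string;      // vendor script URL (illustrative)
  enabled: boolean; // switched on or off in the TMS web interface
}

const container: TagConfig[] = [
  { name: "analytics", src: "https://vendor.example/analytics.js", enabled: true },
  { name: "live-chat", src: "https://vendor.example/chat.js", enabled: false },
];

function loadTags(tags: TagConfig[]): void {
  for (const tag of tags) {
    if (!tag.enabled) continue; // removed with a click, not a code release
    const script = document.createElement("script");
    script.src = tag.src;
    script.async = true; // don't block page rendering
    document.head.appendChild(script);
  }
}

loadTags(container);
```

Because the tag list lives in configuration, marketers can add, edit or remove a vendor without touching the site's source code, which is the point-and-click simplicity described above.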

Monday, August 5, 2019

Enterprise Mobility Management


Enterprise mobility management (EMM) is the set of people, processes and technology focused on managing mobile devices, wireless networks, and other mobile computing services in a business context. As more workers have bought smartphones and tablet computing devices and have sought support for using them in the workplace, EMM has become increasingly significant. The goal of EMM is to determine if and how available mobile IT should be integrated with work processes and objectives, and how to support workers when they are using these devices in the workplace. Because mobile devices are easily lost or stolen, the data on them is vulnerable. Enterprise mobility management is a set of systems intended to prevent unauthorized access to enterprise applications and/or corporate data on mobile devices. These can include password protection, encryption and/or remote wipe technology, which allows an administrator to delete all data from a misplaced device. With many systems, security policies can be centrally managed and enforced. Such device management systems are programmed to support and cooperate with the application programming interfaces (APIs) from various device makers to increase security compliance. EMM grew out of mobile device management (MDM), which focused solely on device-level control and security. After Microsoft's 2015 release of Windows 10, most EMM software providers expanded into unified endpoint management (UEM), which allows IT to manage PCs and mobile devices through a single console. EMM typically involves some combination of MDM, mobile application management (MAM), mobile content management (MCM) and identity and access management. These four technologies started off as individual products, but they are increasingly available through larger EMM software suites.

MDM is the foundation of any enterprise mobility suite. It relies on the combination of an agent app, which is installed on an endpoint device, and server software running in the corporate data center or in the cloud. Administrators use the MDM server's management console to set policies and configure settings, and the agent enforces these policies and configures these settings by integrating with APIs built into mobile operating systems. MAM provides more granular management and security. It allows admins to set policies for a specific app or subset of apps, rather than for the whole device. Some apps have specific MAM APIs built in, while others rely on the device-level MAM APIs in most major mobile operating systems. With MCM, only approved applications may access or transmit corporate data. And identity and access management controls how, when and where workers may use corporate apps and data, while also offering some user-friendly features, such as single sign-on. These technologies all address specific concerns, and the overlap between MDM, MAM and MCM is quite minimal. As more organizations embraced enterprise mobility, vendors started to productize EMM, usually by adding MAM or MCM features to their MDM products. An enterprise app store or other self-service portal for application delivery and deployment is also a common component of EMM software.
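The difference in scope between MDM and MAM can be pictured as data, the way a management console might push policy to the agent on each endpoint. The field names below are illustrative assumptions, not any vendor's actual schema:

```typescript
// Device-level policy (MDM): applies to the whole endpoint.
interface MdmPolicy {
  requirePasscode: boolean;
  requireEncryption: boolean;
  allowRemoteWipe: boolean;
}

// App-level policy (MAM): scoped to a single application.
interface MamPolicy {
  appId: string;
  blockCopyPaste: boolean;
  requireAppPin: boolean;
}

interface DeviceProfile {
  deviceId: string;
  mdm: MdmPolicy;
  mam: MamPolicy[];
}

// The agent reports device state; the server checks it against policy.
function isCompliant(
  profile: DeviceProfile,
  state: { passcodeSet: boolean; encrypted: boolean }
): boolean {
  return (
    (!profile.mdm.requirePasscode || state.passcodeSet) &&
    (!profile.mdm.requireEncryption || state.encrypted)
  );
}

const profile: DeviceProfile = {
  deviceId: "tablet-17",
  mdm: { requirePasscode: true, requireEncryption: true, allowRemoteWipe: true },
  mam: [{ appId: "com.example.mail", blockCopyPaste: true, requireAppPin: true }],
};
console.log(isCompliant(profile, { passcodeSet: true, encrypted: false })); // false
```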

Microsoft built MDM APIs into Windows 10, which opened the door for EMM software to manage PCs in the same way it manages smartphones and tablets. Apple also allows its macOS desktops and laptops to be managed in this fashion. All major EMM vendors support this functionality, marking a market shift from EMM to UEM. In 2017, Gartner named four leaders -- VMware, MobileIron, IBM and BlackBerry -- in its Magic Quadrant, which ranks vendors according to their completeness of vision and ability to execute on that vision. IDC, which ranks vendors based on their capabilities and strategies, named VMware, MobileIron, BlackBerry, IBM and Citrix as leaders. Enterprise mobility management is a growing force as the modern workforce becomes increasingly dependent on personal and enterprise mobile devices. MEP Digital Systems provides a solid EMM platform and can show you how to build a solid business case for EMM. We assess and evaluate the key features a solid EMM platform should possess and compare the leading options on the market for you. Lastly, we are certified associates of the leading EMM providers on the market.

Sunday, August 4, 2019

Workflow Automation



Automation in and of itself can raise productivity. However, we acknowledge that automation cannot solve everything. We have found through experience that what hinders a business is often the processes it follows, whether it is assembling a product or delivering a service. The solution is optimizing the workplace for efficiency and productivity, and using automation as one of the multiple tools at your disposal to achieve set objectives. One of the methods we use during our consultation to improve workflow efficiency in organizations is to list all of the processes and workflows the business follows on a daily basis. We list each process, its purpose, and everyone involved in it. As the customer is working, we start to list the periodic processes people follow, such as weekly and monthly activities. At the end of a month or two, we’ll have a list of every major process and an idea of how often it is done. We take the time to document the recommended way to complete each task. This alone makes things more efficient, since everyone now has a standard way to perform the task, and training new hires becomes much simpler.

We then rank the processes, listing them from most important to least important. At this point many people are able to save time and effort by no longer doing the less important tasks so often, and the business certainly benefits from having the most important tasks prioritized first. This has the side benefit of identifying which processes are worth streamlining first.
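As a toy sketch of this ranking step, the priority score below (importance multiplied by monthly frequency) is one simple heuristic we might apply, not a fixed methodology, and all the values are illustrative:

```typescript
// Rank documented processes so the most valuable streamlining targets
// surface first.
interface Process {
  name: string;
  importance: number;    // 1 (low) to 5 (critical)
  timesPerMonth: number; // how often the team performs it
}

const processes: Process[] = [
  { name: "Invoice approval", importance: 5, timesPerMonth: 60 },
  { name: "Weekly status report", importance: 2, timesPerMonth: 4 },
  { name: "New-hire onboarding", importance: 4, timesPerMonth: 3 },
];

// Frequent, important work is streamlined first.
const ranked = [...processes].sort(
  (a, b) => b.importance * b.timesPerMonth - a.importance * a.timesPerMonth
);
console.log(ranked.map((p) => p.name));
// ["Invoice approval", "New-hire onboarding", "Weekly status report"]
```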

Next comes looking for ways to streamline the operation, one process at a time. Identify the most commonly performed tasks, then look for ways to streamline them. Which steps are redundant or no longer necessary? We solicit feedback from key stakeholders; this tells us what is working, what isn’t working and how things can be improved. We also gather performance data from other departments within the business on what works, which then leads to best practices. Sometimes we find that steps are no longer necessary and can be abandoned: eliminating them from the formal procedure and training everyone in the new, approved process means we’ve just streamlined that operation. If we find that an entire process is irrelevant, such as reports generated that no one reads anymore, we make certain to inform everyone involved to stop doing it.

Having a new and improved official process doesn’t matter if people don’t follow it. Once we’ve modified the processes, formally documented them and trained everyone, we encourage the customer to audit the team to make sure they’re following the new process. The customer can also compare the way things are working relative to their ideal work stream, and sometimes they may find additional areas for improvement. Workflow automation doesn’t eliminate the human element. Instead, it automates the management of tasks and moves things along the workflow, notifying those who have work to do and allowing managers to run reports to check on the status of things. Workflow automation via specialized software can facilitate communication, such as sending an email for someone to give feedback on a report or document under review. And the notes can be automatically saved for review by someone else later.

Moving to electronic records also reduces the risk that critical written information is lost. Data can be accessed any time it is necessary by anyone who should have access to it. We install software that can streamline back-office operations, which yields major savings in this area. For example, purchasing and integrating software to handle filing claims, sending bills and invoices, and processing payments will reduce the amount of time spent on these time-consuming, detailed tasks. It also reduces the odds of mistakes that can cost the business money. Communication is often seen as an interruption in the workflow. However, opening up lines of communication reduces one’s workload long-term. For example, if customers can email a question to our technical support or the subject matter expert, that person can answer it as time allows, without the interruption and additional time required to answer a phone call. If staff can readily communicate with others when potential problems arise, they may get solutions before things get worse. Another way to improve workflow efficiency in an organization is to open up the floor to feedback from everyone involved. Ask staff members, managers, people in other departments and customers how they think things could be improved. Sometimes they can identify small changes with a major impact.

(c) 2017, MEP Digital Systems (Pty) Ltd.

Digital Merchandising Systems



One of the biggest investments made by our clients operating within the Consumer Packaged Goods (CPG) industry is in field merchandising tools. The merchandising division is responsible for putting hands on product in retail, making sure that the product is positioned in the way that makes it most appealing to the consumer. The primary function of a Retail Merchandiser is to perform in-store audits, both standard Retail Audits (also known in the industry as Follow-Ups) and Trade Compliance Audits or Checks. These are the activities field reps perform during their audits:

Retail Audits are used to make sure that products are displayed on the shelf so that customers can find them, and that when customers do find them, the products appear in the most appealing way possible. Retail Audits are typically performed on a periodic basis to make sure that the product is consistently available and managed well on the retail shelf.

Compliance Audits are performed to coincide with special promotions that a manufacturer negotiates with retailers from time to time. These audits are performed to ensure that the product is being promoted as agreed, both in how it is displayed in the store, and how it is priced.

Regularity: Merchandisers visit stores on a regular basis to make sure that the product is consistently displayed to maximize sales. Any deficiencies found are brought to the attention of the category or store manager, both to make sure that store personnel are aware and can make corrections when the merchandiser is not there, and to send the message that the manufacturer cares deeply about quality!

Consistency of Data Collection: Because merchandisers visit retail outlets on a regular basis, the data they collect on those visits must remain consistent over time. In order to detect and analyze trends that can highlight issues throughout the retail supply chain, the data is collected in such a way as to enable easy reporting. If in-store surveys and audit questions are constantly changed, it becomes impossible to create reports that identify time-based trends. Best practice for keeping audits consistent is normally to use a structured data collection tool, ideally one that includes or feeds a reporting system.

Stock level reporting: There are several types of stock in a typical retail store. There is ‘on-shelf stock’, which should always be full, and ‘near-shelf stock’, sometimes in risers above the shelf, sometimes in bins below or behind shelving; this stock is used for quick replenishment of the shelf. The final stock location is the ‘back room stock’. An excellent retail audit includes checking all stock locations to make sure that issues that may cause out-of-stocks (OOS, also known as ‘stock-outs’) are identified early. OOS causes major issues for distributors and retailers alike, and can result in financial loss. It is vital to keep an eye on the availability of your products in order to prevent OOS, and audits help in the collection of data to better predict and prevent it. Best practice for reporting stock is to report against pre-set ‘High, Medium, Low’ levels rather than taking time to get exact counts.
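A structured check against those pre-set bands might be modelled like the following TypeScript sketch; the field names and the out-of-stock risk rule are illustrative:

```typescript
// Stock is reported in bands, not exact counts, for each location checked.
type StockLevel = "High" | "Medium" | "Low";

interface StockCheck {
  sku: string;
  onShelf: StockLevel;
  nearShelf: StockLevel; // risers above, or bins below/behind shelving
  backRoom: StockLevel;
}

// Flag early OOS risk: the shelf is running low and there is little
// replenishment stock anywhere else in the store.
function oosRisk(check: StockCheck): boolean {
  return (
    check.onShelf === "Low" &&
    check.nearShelf === "Low" &&
    check.backRoom !== "High"
  );
}

const visit: StockCheck = {
  sku: "CEREAL-500G",
  onShelf: "Low",
  nearShelf: "Low",
  backRoom: "Low",
};
console.log(oosRisk(visit)); // true: escalate before a stock-out occurs
```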

Planogram Compliance checks: Extensive research has been conducted on planogram maintenance and the importance of planogram compliance, and it has informed efforts to automate the process; however, most CPG manufacturers still rely on physical inspections done by merchandisers in the store. These compliance checks make sure that all of the negotiated shelf space has been allocated, and that the product is sequenced as per the planogram design, with the correct product order, on the correct shelf, with the appropriate number of slots. Planogram compliance can have a dramatic impact on sales of a product; studies have shown that a doubling of facings leads to a 20% increase in sales. Best practice when it comes to planogram compliance is to separate out-of-stock reporting from the actual planogram compliance reporting, as the results of these inspections inform very different business issues: supply chain in the case of stock-outs, and poor in-store shelf management in the case of compliance issues.
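Keeping compliance reporting separate from stock reporting can be as simple as a dedicated comparison of planned versus observed facings. The following TypeScript sketch uses invented record shapes purely for illustration:

```typescript
// The negotiated planogram: how many facings each SKU gets on which shelf.
interface PlannedFacing {
  sku: string;
  shelf: number;
  plannedFacings: number;
}

// What the merchandiser actually observes in the store.
interface ObservedFacing {
  sku: string;
  shelf: number;
  actualFacings: number;
}

function complianceIssues(plan: PlannedFacing[], observed: ObservedFacing[]): string[] {
  return plan.flatMap((p) => {
    const match = observed.find((o) => o.sku === p.sku && o.shelf === p.shelf);
    if (!match) return [`${p.sku}: missing from shelf ${p.shelf}`];
    if (match.actualFacings < p.plannedFacings)
      return [`${p.sku}: only ${match.actualFacings} of ${p.plannedFacings} facings`];
    return []; // compliant
  });
}

const plan = [{ sku: "JUICE-1L", shelf: 3, plannedFacings: 4 }];
const observed = [{ sku: "JUICE-1L", shelf: 3, actualFacings: 2 }];
console.log(complianceIssues(plan, observed)); // ["JUICE-1L: only 2 of 4 facings"]
```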

Product Condition Optimization: Merchandisers are tasked with making sure that the product appears as fresh as possible. Any damaged product or shop-worn packaging, no matter how slight the damage, is removed and replaced. The shelving itself, and anything around it that the merchandiser can affect, is kept clean and fully serviceable. Any issues are also brought to store management's attention. It is impossible to overstate how important it is that the product and its presentation always reinforce a commitment to quality.

Competitive Activity Reports: Retail merchandisers are the eyes and ears of a CPG organization. While out managing the company's products on the shelf, these foot soldiers also perform valuable reconnaissance about competitors, including how they price their products, what promotions they are running, and where they are having success with market penetration. Best practices for using merchandisers to report competitive activity include having a 'watch list' of specific brands or products that merchandisers should be on the lookout for, providing structured 'spot reports' that capture the key bits of information that marketing wants about competitors, and allowing field merchandisers to provide open, unstructured feedback about what they perceive the competition to be doing in the market.
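Here is a minimal Python sketch of how such a watch list and structured spot reports could be represented; the brands, fields and escalation rule are invented for illustration, not a real reporting workflow.

```python
from dataclasses import dataclass

# Brands marketing wants eyes on; contents are invented for the example.
WATCH_LIST = {"CompetitorX Cola", "CompetitorY Chips"}

@dataclass
class SpotReport:
    store_id: str
    product: str
    shelf_price: float
    promotion: str = ""     # e.g. "2-for-1"; empty if none seen
    free_text: str = ""     # open, unstructured field observations

def needs_escalation(report: SpotReport) -> bool:
    """Structured fields make watch-list hits easy to route to marketing,
    while free_text preserves anything else the merchandiser noticed."""
    return report.product in WATCH_LIST and bool(report.promotion)

r = SpotReport("ST-042", "CompetitorX Cola", 12.99, "2-for-1",
               "End-cap display near tills, looked freshly stocked")
print(needs_escalation(r))  # True
```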

(c) 2018, MEP Digital Systems (Pty) Ltd.

Audio & IoT Systems


Music and podcasts soundtrack and enrich people’s lives. Whether they’re listening while cooking, focusing, working out, or simply chilling — streaming is personal and reflects the moment. And while all of these moments are still happening, they might look different than they used to. Working might take place at the kitchen table while also juggling childcare. Cooking might be happening a lot more often. And chilling? Very necessary right now.

Ensuring that you’re reaching your target audience in the right moment — and that your messaging matches — is always important. But now that routines and priorities have changed, it’s essential to update your advertising to stay relevant and sensitive to your audience. As a business, ask: How can we show up in useful, non-disruptive ways right now? It starts with being hyper-sensitive to the context in which your message will be received and how that moment is different now. The message you sent to your listeners when they were working out in a gym might not strike the right tone when those listeners are getting in reps from the living room. The streaming generation is savvy, and they might be especially critical right now of how businesses and brands rush to weigh in on current events.

Being culturally relevant doesn’t necessarily mean addressing events head-on. It's about tailoring messaging to a personal listening moment within the context of a larger cultural moment. What sounds clever under normal circumstances might sound insensitive right now, especially in an ad. People lean on audio to fill very specific needs: to stay informed, grounded, and entertained. Businesses can play a role in filling those needs by easing up on the hard sell, and focusing on providing something useful. What information can you give that they’re looking for? What’s your business doing differently right now to help people? What kind of practical wellness advice, home hacks, or parenting tips could your brand provide? Audio is a uniquely flexible format that can be produced remotely and executed quickly.


Our IoT gateways offer connectivity for both brownfield & greenfield environments. Application-ready building blocks are available to speed time-to-market. And we build intelligent middleware for remote monitoring into all of our boards and modules. Our rugged specifications are ideal for meeting the extended lifecycle requirements of industrial applications. And our standards-based designs ensure system compatibility and solution scalability.

The Internet of Things (IoT) is one of the most exciting phenomena of the tech industry these days. But there seems to be a lot of confusion surrounding it as well. Some think about IoT merely as creating new internet-connected devices, while others are more focused on creating value through adding connectivity and smarts to what already exists out there. We would argue that the former is an oversimplification of the IoT concept, though it accounts for the most common approach that startups take toward entering the industry. It’s what we call greenfield development, as opposed to the latter approach, which is called brownfield.

In software development, greenfield refers to software that is created from scratch in a totally new environment. No constraints are imposed by legacy code, and there are no requirements to integrate with other systems. The development process is straightforward, but the risks are high as well because you’re moving into uncharted territory.

In IoT, greenfield development refers to all these shiny new gadgets and devices that come with internet connectivity. Connected washing machines, smart locks, TVs, thermostats, light bulbs, toasters, coffee machines and whatnot that you see in tech publications and consumer electronic expos are clear examples of greenfield IoT projects.

Greenfield IoT development is adopted by some well-established brands as well as a lineup of startups rushing to jump on the IoT bandwagon and grab a foothold in one of the fastest-growing industries. It is much easier for startups to enter greenfield development because they have a clean sheet and no strings attached to past development.

Again, to take the cue from software development, brownfield development refers to any form of software that is created on top of legacy systems or with the aim of coexisting with other software that is already in use. This imposes constraints and requirements that limit developers' design and implementation decisions. The development process can become challenging and arduous, requiring meticulous analysis, design and testing — things that many upstart developers don’t have the patience for.

The same thing applies to IoT, but the challenges become even more accentuated. In brownfield IoT development, developers inherit hardware, embedded software and design decisions. They can’t deliberate on where they want to direct their efforts and will have to live and work within a constrained context. Throwing away all the legacy stuff will be costly. Some of it has decades of history, testing and implementation behind it, and manufacturers aren’t ready to repeat that cycle all over again for the sake of connectivity.


IoT is growing rapidly, but lingering connectivity, security, and data storage concerns will need to be resolved to guarantee its continued flourishing. There’s no question that the Internet of Things (IoT) is going to shape the future of virtually every industry. Like the personal computer, the internet, and cloud computing before it, IoT has the potential to kickstart a massive wave of corporate change. A survey found that 62% of corporate leaders believe the IoT’s impact on their industry will be either “very high” or “transformative.” The source of this broad-based enthusiasm is no mystery. From capturing new streams of data that can be funneled into increasingly mature analytics platforms to facilitating the automation of a range of routine processes, strategically integrated IoT devices will significantly elevate many companies’ bottom lines. However, to realize the paradigm-shifting potential of IoT, stakeholders need to find solutions to three major issues that continue to stand in the way of widespread IoT adoption: adequate connectivity, device security, and excessive data storage and processing requirements.

Connectivity is the very essence of IoT. While most IoT devices are compatible with both wired and wireless connections, the latter is currently the more popular choice. Unfortunately, this popularity has created serious bandwidth concerns. Consider this: As of 2018, there were just over 17 billion internet-connected devices in use worldwide. By 2025, this number is projected to balloon to more than 55 billion. Tripling the number of internet-reliant devices in the span of just seven years will place immense pressure on the world’s existing networking infrastructure.

Right now, 5G is the most promising solution to this connectivity challenge. The newest generation of mobile networking, 5G is a revolutionary technology that promises to provide more bandwidth and faster connection speeds than its predecessors. Encouragingly, some 5G networks are already being rolled out around the world, and 5G connectivity is likely to be widely available within a year or two. IoT stakeholders are testing a host of other possible solutions, as well. In Russia, telecommunications companies are using low-power, long-range wide area networks to improve IoT connectivity in the remote expanses of Siberia. Further, in certain circumstances, the strong connectivity delivered by the Bluetooth wireless communications standard will be able to lessen the burden placed on more traditional networks, though the standard’s limited range will preclude it from becoming an IoT connectivity panacea. That said, in a spatially-condensed context, such as a “smart home” ecosystem, Bluetooth represents an attractive short-term solution.

IoT deployment and security concerns

Cybersecurity is another major obstacle to IoT adoption. As a growing number of devices have come online, lawmakers, the FBI, and private cybersecurity professionals have all expressed concerns about the serious threat posed by hackers. As things stand, the best way to bolster the security of IoT is to follow all regulations and/or advice promulgated by both lawmakers and IT professionals. In practice, this means IoT device manufacturers will have to either assign a unique password to each device they sell or prompt users to create their own unique password before they connect their device to a network. While this degree of precaution is only legally mandated in some US states, it’s the kind of protocol to which every IoT manufacturer should strive to adhere. Similarly, after discovering that cybercriminals routinely exploit IoT devices to extract login credentials, infiltrate networks, steal intellectual property, and more, the FBI issued a public service announcement featuring a set of IoT security best practices. In addition, the FBI recommended rebooting devices regularly to clear malware residing in a device’s memory and patching IoT devices as soon as security upgrades are made available. Implementing strong encryption protocols and utilizing VPNs can also prevent malware that successfully compromises a single IoT device from impacting an entire organization.

IoT deployment and capacity demands

In many ways, data is at the root of both IoT’s connectivity and security issues. IoT produces a massive amount of data -- after all, the value propositions of many IoT devices rest with the devices’ ability to constantly monitor and interact with their environments -- and few stakeholders are adequately equipped to store and secure it. To accommodate the influx of IoT-generated data, companies will need to find ways to increase their current server capacity. Edge computing -- for example, running data through micro data centers located close to a network’s edge -- will allow companies to steer roughly half of the data generated by IoT devices away from their existing data centers. But supporting a robust IoT infrastructure will inevitably require a diverse portfolio of micro data centers, traditional proprietary data centers, and on-demand cloud computing resources. IoT clearly has remarkable potential. Provided industry stakeholders make a concerted effort to facilitate strong network connectivity, improve device security, and expand data storage and processing capacity, this potential is likely to be realized sooner rather than later.
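As a rough sketch of the edge idea, the following Python example shows how raw device readings can be reduced to compact summaries before anything is forwarded to a central data center; the window size and summary fields are assumptions made for illustration.

```python
def summarize_at_edge(readings, window=60):
    """Aggregate raw sensor readings at a micro data center near the
    network's edge, forwarding only compact summaries upstream.
    Window size and summary fields are illustrative assumptions."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "count": len(chunk),
            "mean": sum(chunk) / len(chunk),
            "max": max(chunk),
        })
    return summaries

# 600 raw samples collapse into 10 summary records bound for the data center.
raw = [20.0 + (i % 7) * 0.1 for i in range(600)]
print(len(summarize_at_edge(raw)))  # 10
```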

(c) 2019, MEP Digital Systems (Pty) Ltd.

Content Storage Systems


Network-attached storage (NAS) is a file-level (as opposed to block-level) computer data storage server connected to a computer network, providing data access to a heterogeneous group of clients. NAS is specialized for serving files either by hardware, software, or configuration. It is often designed as a computer appliance – a purpose-built specialized computer. NAS systems are networked appliances that contain one or more storage drives, often arranged into logical, redundant storage containers or RAID. Network-attached storage removes the responsibility of file serving from other servers on the network. NAS devices typically provide access to files using network file sharing protocols such as NFS, SMB, or AFP. From the mid-1990s, NAS devices began gaining popularity as a convenient method of sharing files among multiple computers. Potential benefits of dedicated network-attached storage, compared to general-purpose servers also serving files, include faster data access, easier administration, and simple configuration.

Some NAS versions of drives support a command extension that allows extended error recovery to be disabled. In a non-RAID application, it may be important for a disk drive to go to great lengths to successfully read a problematic storage block, even if it takes several seconds. In an appropriately configured RAID array, a single bad block on a single drive can be recovered completely via the redundancy encoded across the RAID set. If a drive spends several seconds executing extensive retries, it might cause the RAID controller to flag the drive as "down"; whereas if it simply replied promptly that the block of data had a checksum error, the RAID controller would use the redundant data on the other drives to correct the error and continue without any problem. Such a "NAS" SATA hard disk drive can be used as an internal PC hard drive, without any problems or adjustments needed, as it simply supports additional options and may possibly be built to a higher quality standard (particularly if accompanied by a higher quoted MTBF figure and higher price) than a regular consumer drive.
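To make the redundancy mechanism concrete, here is a minimal Python sketch of the XOR parity idea behind RAID levels such as RAID 5; the block contents are invented, and a real controller works on full sectors rather than four-byte strings.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks together -- the parity scheme behind
    RAID levels such as RAID 5, shown here byte for byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks and their parity block, as a controller would store them.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d1, d2, d3)

# If d2 fails its checksum, the controller rebuilds it from the survivors.
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
```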

NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower-cost systems such as load-balancing and fault-tolerant email and web server systems by providing storage services. A potential emerging market for NAS is the consumer market, where there are large amounts of multimedia data. Such consumer-market appliances are now commonly available. Unlike their rackmounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has fallen sharply in recent years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWire external hard disk. Many of these home consumer devices are built around ARM, PowerPC or MIPS processors running an embedded Linux operating system.

We just happen to have a few hard drives lying around, which we've put to good use by creating our own, very cheap NAS setup with a Raspberry Pi. The current setup is two 4TB hard drives and one 128GB flash drive, connected to the network and accessible from anywhere via the Raspberry Pi. Models 1 and 2 work just fine for this application, but we got a little better support from the Raspberry Pi 3. With the Pi 3, we were still limited to USB 2.0 and 100Mbps via Ethernet. However, we were able to power one external HDD with a Pi 3, while the Pi 2 Model B could not supply enough power to the same HDD. On the Raspberry Pi NAS, we currently have one powered 4TB HDD, one non-powered 4TB HDD and a 128GB flash drive mounted without issue. To use a Pi 1 or 2 for this, consider a powered USB hub for the external drives, or an HDD that requires external power. Additionally, we used a microSD card -- 8GB is recommended -- and the OpenMediaVault OS image. Contact MEP Digital Systems for a consultation today.

Product Deployment


Almost every mid-sized office system we've worked on needed some method to clone and deploy specific OS configurations and application sets. The PC requirements were the same: deployment of the same configuration with little or no repetitive work. The ideal target is what Microsoft calls “zero-touch” deployments, which require no interaction on the target computer whatsoever. This we offered using Microsoft System Center Configuration Manager (SCCM) along with the Microsoft Deployment Toolkit (MDT).

Many shops do not operate that way and require some level of interaction during the imaging process. During deployment, the technology deploys and boots up Windows on dissimilar hardware, sparing technicians the task of configuring a new master system for each make of hardware requiring OS deployment.

A system disk image is easily deployed on the hardware where it was created or on identical hardware. However, if the motherboard is changed or a different processor version is used, the deployed system may be unbootable. An attempt to transfer the system to a new, more powerful computer will usually produce the same result, as the new hardware is incompatible with the most critical drivers included in the image. We use deployment technology that provides an efficient solution for hardware-independent system deployment by adding the crucial hardware abstraction layer (HAL) and mass storage device drivers.

Firstly, our virtual machines provide the option to create hardware-neutral images which can be applied anywhere, regardless of what is actually in the target computer. One image becomes possible for multiple hardware configurations. This also means less work in maintaining the image, as any work only needs to be done once rather than once per type of hardware. Secondly, most virtual machine software we use has the ability to save a VM’s state and revert back to that state should it become necessary. VMware calls these “snapshots”, and Microsoft uses the term “checkpoint” in Hyper-V. Should a screw-up occur, it can be undone without losing work or having to redo everything. These are two facets that are simply not available when building images on real hardware. Test on real hardware, but build in a virtual environment.

The build VM workstation has to have some power to it: nothing extravagant, but it must be above average. We avoid using a laptop as a VM build station; laptops are great for testing, but a desktop PC is optimal. A quad-core CPU (Intel Core i5/i7, or AMD Phenom series) is a good start, and the more powerful, the better. RAM is key: the more, the better. We recommend 16GB of RAM on the workstation, which can handle three running VMs on top of the host OS. VMs take up storage space quickly; working on several VMs, it is not difficult to fill a 2TB drive. The VM server’s host OS should be as lightweight as possible. It needs to host a hypervisor and not much else. The more software we add to the host, the more packages we need to keep up to date to have a stable server.

(c) 2018, MEP Digital Systems (Pty) Ltd.

Product Development Process


When we commit to developing products from the ground up, we get our team to work to take a prototype off the napkin and put it in the customer's hands — how do we do that? Well, we’ve chosen to invest in rapid prototyping because it allows us to create tangible products from our computer-aided design (CAD) files, as opposed to just 2D drawings, with a relatively quick turnaround time. The major stages we cover include:
- The product requirements document (PRD)
- The prototype components and the process for creating each
- Planning the project budget and timeline accordingly

These are based on many years of installation, repairs and creating custom-made fittings, including prototypes, plus a few years of working with designs. Most customers want to know a “ballpark figure” for the cost of their project after briefly telling us about their idea. The reality is that product development work involves all sorts of design — simple and complex. Because no two projects are alike, a detailed product requirements document (PRD) is required to get the process started and, of course, to even get a quote. The PRD sets the tone for how the product will be designed and manufactured. We like to think of it as a list of the non-negotiables of the product. It’s in the PRD that we include the following:
- Your main product application — where it’ll be used, how and by whom.
- The minimum or maximum size the customer would like the product to be.
- How long the battery should last (think of the use cases for your product)
- Whether or not it should be waterproof, weather resistant, fireproof, etc.
- Accuracy requirements for any feature (the more accurately the customer wants something to perform, the more expensive it gets).
- Any integrations — how it should work with other systems.

Let’s say a customer has commissioned a custom-built standalone or remote smartlock system. A lot of time will be spent picking parts with lower power consumption and optimizing the design for longer battery life. For a device like an indoor digital display, on the other hand, a customer wouldn’t really be concerned with battery life, since it will be plugged into a wall socket. These are the types of items that are well documented in the PRD. Our design team then gathers the information in the PRD and starts designing the product around these constraints — along with what will be realistic based on the customer's budget and available technologies. We take the information in the PRD and turn it into a statement of work (SOW), which outlines project-specific activities, timelines, deliverables and other payment-related information. The last important thing about the PRD is that it covers each of the components of a prototype.
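To show how those constraints stay structured, here is a minimal, hypothetical Python sketch of a PRD captured as data; every field name and value is invented for illustration and is not our internal template. Capturing the non-negotiables this way makes it straightforward to carry them into the SOW.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProductRequirements:
    """The PRD's non-negotiables as structured fields. Field names and
    example values are illustrative, not a real customer brief."""
    application: str                     # where, how and by whom it's used
    max_size_mm: tuple                   # (width, height, depth) envelope
    battery_life_hours: Optional[float]  # None for mains-powered devices
    ingress_rating: str                  # e.g. "IP54" for weather resistance
    accuracy: str                        # tighter accuracy = higher cost
    integrations: list                   # systems it must work with

smartlock_prd = ProductRequirements(
    application="Standalone remote smartlock for office doors",
    max_size_mm=(80, 160, 40),
    battery_life_hours=8760,   # a year per battery drives low-power part choices
    ingress_rating="IP54",
    accuracy="lock state reported within 1 second",
    integrations=["existing access-control backend"],
)
```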

Most hardware products we design consist of four different parts:
1. An enclosure made of plastic, metal or another material
2. A printed circuit board or other electronic components
3. Firmware, or the code that runs on the electronic device
4. Software, or the code that runs on the computer or phone to interact with the newly developed hardware

However, prototypes don’t always have all of these parts. For instance, a remote control case would only have a plastic component, and none of the others would matter. Or maybe a customer wants to launch an educational development kit (such as the Arduino), so there may be no real need for a case or software. In any case, the hardware enclosure design is usually a two-step process.

The first step is to have an industrial designer sketch out several different concepts of what the product could look like based on what it’ll be used for. The sketch is often done by hand but our industrial designers use software. Several sketches may be required in order to ensure that the hardware enclosure not only meets a customer's vision but is feasible for manufacturing. We usually do four or five sketches to gather feedback before settling on something the customer really wants. If a customer is still in the initial steps of product development and only wants a minimum viable product (MVP) or a rough prototype to analyze its use cases, we may skip refining the drawing and just work from an initial sketch. At this point, functionality is more important than aesthetics and adding industrial design to the mix would just be more costly and lengthy for someone who just wants it for testing purposes.

After the sketch is finalized, a CAD designer will create a model for the prototype using SolidWorks, Autodesk Inventor, Pro Engineer or Catia. It’s in this step that the engineer will specify tolerances, fittings, assemblies, DFM (design for manufacturability) features, etc. The 3D drawings usually take a lot of time to model; it can take three to four weeks, or several months, depending on the complexity of the product.

The end result is a set of different types of files, usually composed of 3D CAD files, 2D drawings for manufacturing, and a bill of materials (BOM) for off-the-shelf parts that go into the product. The actual 3D printing or CNC machining of the prototype, based on these files, happens after the design phase is complete. Then the next phase begins: the design of the circuit board, or the brains of the product, which also often follows a two-step process.

The first step is the research and development of the product. Some of the products we work on are very innovative and have never been done before. That means it’s hard for us to estimate how long it would take to prove out the concept before trying to design it. The second step is a proof of concept (POC). Working in uncharted territory usually results in a proof of concept that will test your product’s function and technology — but it’ll look nothing like the final product as we’ll most likely use breadboards and off-the-shelf electronic parts. The POC's only purpose is to make sure that the product idea is doable using what is technologically feasible at the present moment. We’ll use breadboards, microcontrollers, sensors, jumper wires and other electronic components in order to get a POC.

Yet, a lot of our products skip the POC if the product involves technology we know will work. If that’s the case, we’ll usually jump straight to the circuit board design. We always do the circuit board design in tandem with the enclosure design, unless the product does not have an enclosure. Unlike the enclosure design process, however, the circuit board normally takes longer to iterate on once the first design is ready. This is because the circuit board prototyping process is very slow. Not only do we have to make a POC, but once it’s done we have to design the PCB on the computer, make the bare circuit board, create a stencil for the board and order all the components before we actually assemble the final pre-production prototype and test to ensure the design works.
It’s not like we can just go to a 3D printer and hit “print.” Every time we have an error we need to address, we have to make a new version, make a new bare board, populate it, program it, test it, rinse and repeat. This process can take between one and three months.

At the end of the circuit board design stage, we end up with a set of Gerber files describing the board's layers and a Bill of Materials listing the components that will be populated onto the board. Most of the time we also deliver one finished, assembled prototype that the customer can test in its environment before going into production or making changes.
Then comes the firmware, the software that gives the product life. To design it, we basically have to convert the product requirements into code. For example, if a customer wants a blue light to come on when the device is connected, we need to program that feature. Turning on an LED is easy, but add 50 other things that need to happen in a multitude of scenarios, and we run into some pretty complex problems. Most programmers will know that these firmware design problems can take forever to figure out; estimate several months, if not a year, to work out all the firmware issues. We usually program everything in C but have also written in C++, Python and many other languages.
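As a toy illustration of the blue-light requirement above, here is a minimal MicroPython-style sketch (chosen for brevity rather than the C we would normally ship); the pin number and the connection check are invented for the example.

```python
# MicroPython-style sketch of the "blue LED on connect" requirement.
# The pin number and the connection check are stand-in assumptions.
from machine import Pin
import time

blue_led = Pin(2, Pin.OUT)   # assumes the blue LED sits on GPIO 2

def is_connected():
    # Placeholder for the real link check (BLE bond, Wi-Fi join, etc.)
    return True

while True:
    blue_led.value(1 if is_connected() else 0)
    time.sleep_ms(250)       # simple polling keeps the example readable
```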

The last part of many modern hardware products is the software, a program that allows the hardware to send and receive data over a connection while displaying it in a usable way. This is usually a program that runs on the computer — or on the web — or an application that runs on a phone. Fitbit, for example, uses a wristband with a microcontroller, accelerometer and battery to send the step count to the phone. The application on the phone converts this data into useful information, and can also be used to configure how often the band reports information to the phone to conserve battery life. But not all products require a software application to work — think of a Bluetooth speaker. If that’s the case, they only need firmware development. We would probably spend even more time on software development than on firmware development, mostly due to user interface (UI) and user experience (UX) design. Although it may bring hefty costs, investing in UI/UX will certainly help any application be more effective, user-friendly and aesthetically pleasing — especially if the software is where the end user will mostly interact with the product.

Now comes the most-asked question: how much should the customer budget for the hardware prototype design? Let’s say a customer wanted a digital display and could spend X, Y or Z amount for the full product design. All of these would get them a digital display prototype, but the differences between them would be night and day. In the end, the more funding a customer has for rapid prototyping, the more engineering time we can spend on it and the better the product can get. With that said, we encourage customers to set their prototyping budget so that our design team can help optimize it to get the best results, because we are all about disruptive technologies.

Re-Purposing Products

When clients upgrade their routers, we get creative. What do we do with these old routers? When clients switch ISPs, they’ll often be asked to return the older device. But when a spare router is kicking around the place, we have several ways to reuse and re-purpose it.

1. Wi-Fi Repeater
When the Wi-Fi network doesn’t extend across the full range of the facility, one might opt for powerline Ethernet adapters, but adding a second router into the mix is a good alternative. This means connecting the old router to the new wireless network using the Wi-Fi signal. It can then share access to the Wi-Fi network, giving greater coverage. Although there may be some latency issues, overall this is a quick and easy way to extend a wireless network. It has various uses, from giving better Wi-Fi access to a remote part of the facility to letting clients stream video or CCTV footage to a tablet while they are within the facility.

2. Guest Wi-Fi Connection
When clients regularly have visitors who are allowed to use the wireless network, why not give those visitors their own network? This is like the wireless repeater project, but with a twist. The router connects to the existing, password-protected network but gives password-free access to new devices. This uses the guest network feature of the old router, which by default prevents guests from accessing other devices on the network. If this level of security isn’t enough, we adjust the firewall settings on the main router.

3. Wi-Fi Radio Streamer
When clients want to enjoy their own radio stations on the wireless network, some routers can be configured to play internet radio once we install the OpenWrt or DD-WRT custom router firmware. Some other software is also required, and we’ll also need a USB sound card to output audio. While this isn’t an easy build, and plenty of other internet radio options are available, it is still a great project. It gives insight into the power of custom firmware, as well as an appreciation of how music is streamed across the wireless network. However, we have built one without a fuss using our Raspberry Pi smart streaming speaker project, which is a good option.

4. Network Switch
Most routers don’t have more than six Ethernet ports. With the increase in wireless technology, this figure might even be as low as four. With a clear need for devices to be connected over Ethernet, we often run out of ports. For example, home appliance monitoring devices, TV decoders with smart TV functionality, games consoles and more might have no wireless networking. They need a physical connection to a network, and that means Ethernet. We add network switches when we run out of Ethernet ports. A switch is basically the Ethernet version of a mains power bar, plugged into one port on the router to multiply it. The old router typically has four or more ports, so connecting it will instantly increase the number of ports available. We power up the old router and disable its wireless networking to avoid conflicts.

5. Wireless Bridge
What if the new router is wireless only? Perhaps the ISP didn’t offer a router with Ethernet ports, or a client uses a cellphone to connect to the internet but requires an Ethernet port. Either way, when there is a need to connect Ethernet devices to the network, a wireless bridge is the answer, and an old router can be repurposed as one. This works a little like a wireless repeater, but rather than sharing the Wi-Fi connection, the wireless bridge offers Ethernet connectivity. The old router is connected to an existing Wi-Fi network, and its Ethernet ports are used to connect devices requiring Ethernet.

6. Smart Home/Office Hub
Some routers have useful additional ports. In some cases, this might be a USB port, which makes flashing OpenWRT or DD-WRT router firmware easy. Other devices might come with a serial port, and these routers can be repurposed as a home automation server. Basically, the router runs a web server that is accessed with an internet browser, whether on a PC or, for convenience, a smartphone. We use this with an Arduino microcontroller hooked up to the router, and some RF-controlled power switches, to create a basic smart home/office setup. While easier options are available, we use this to get a better understanding of home automation in our workshop.
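As a rough illustration of that hub pattern, the sketch below uses Python's standard http.server plus the pyserial library to relay browser commands to an attached microcontroller; the serial port name, baud rate and one-byte command protocol are all assumptions made for the example, not our production setup.

```python
# Hypothetical relay: a tiny web server forwards switch commands to an
# Arduino over serial. Port name, baud rate and the one-byte command
# protocol are assumptions made for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

class HubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /on or GET /off from a phone's browser
        arduino.write(b"1" if self.path == "/on" else b"0")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

HTTPServer(("0.0.0.0", 8080), HubHandler).serve_forever()
```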

7. NAS Drive Router
When clients are looking for ways to store data on a single storage device and access it from anywhere, we offer them Network Attached Storage (NAS), which is basically a hard disk drive attached to a network. While our NAS devices are affordable enough, an old router hanging around lets us save even more money. Note that this is limited to routers that can run custom firmware (like DD-WRT), have a spare USB port, and allow browsing of the contents of any connected USB devices. Without a USB port, there’s no way to attach the hard disk drive or USB flash storage. Once set up, our custom-built NAS gives the client instant access to important data from anywhere, on any device. These are some of the great ways to repurpose old routers; even if a router is really old and missing some key modern wireless networking features, we can still use it as a switch, or even a guest network.

Content Analytic Platforms

One of the huge upsides in the digital distribution economy is access to data. Content creators have more tools for tracking their content...