Everworks logo

System Integrator Uses Thin Client Technology to Centralize Control of Disparate Machines and Systems

Growth, for any business, is always a good thing. But even good things have challenges. When a growing food and beverage customer found that too many new machines combined with their existing controls system were making work harder, they called on Everworks, Inc. to streamline their facility.

The customer had acquired multiple machines and new automation equipment as part of their growth strategy. Each new machine came with its own OEM solution, typically a stand-alone FactoryTalk View ME PanelView or SE Local Station. The SE instances ran on industrial computers that required continuous updates and expensive maintenance. What the customer needed was a strategy to integrate four OEM systems with their existing native controls to create one modern, fully automated, large food processing line. [...]

Stratus Technologies on the Cutting “Edge”

Stratus Technologies, a longtime partner of ThinManager, has developed and released a new server, the Stratus ztC Edge.  Stratus supported the ThinManager platform when it was produced by ACP, and continues to do so now that it is part of the Rockwell Automation line of products.  We are pleased to share this release information and welcome an amazing new piece of technology to the industrial ecosphere.


A Touchy Subject: Touchscreen Technology Bringing the World to Our Fingertips


They can bring you a world of information and services at the touch of your finger. Touchscreens on electronic devices allow almost anyone to control and operate digital gadgets with a mere tap. Let’s take a look at how touchscreens work and the ways they are changing our present and our future.

How Touchscreens Work

There are two basic types of touchscreens: resistive and capacitive. (1)


Resistive touchscreens are the most common type of touchscreens. They’re typically found at ATMs and kiosks that dispense movies on DVD, etc.

How it works

The screen consists of two electrically conductive layers. Press your finger on the screen and the flexible top layer (glass or plastic) touches the one on the bottom, completing an electrical circuit at that point and sending a signal to the software inside.
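As a rough sketch of the principle (not any particular controller’s API): the pressed point splits each resistive layer into a voltage divider, so the voltage measured on each axis is proportional to position along that axis. This example assumes a hypothetical 12-bit ADC and a 320×240 screen:

```python
def resistive_touch_position(adc_x, adc_y, adc_max=4095, width=320, height=240):
    """Convert raw ADC readings from a 4-wire resistive panel into screen
    coordinates. The touch point divides each layer into a voltage divider,
    so each reading is proportional to position along that axis."""
    x = adc_x / adc_max * width
    y = adc_y / adc_max * height
    return round(x), round(y)

# A touch dead-center reads roughly half-scale on both axes.
center = resistive_touch_position(2048, 2048)
```

Real touch controllers add debouncing, pressure sensing, and calibration for panel non-linearity, but the position math reduces to this ratio.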


Capacitive screens are generally found on the devices we now use everyday: smartphones, electronic tablets, GPS navigation, etc.

How it works

Capacitive screens have electrical sensors that react to the natural conductivity of the human body. When you touch the screen, your body draws a tiny charge, changing the electrostatic field at that point, and microcontrollers translate the change into the selected command.

Brief History of Touchscreens

How have we gotten in touch with this technology? (2)

1965 – First finger-driven touchscreens invented by E.A. Johnson

1970 – Dr. G. Samuel Hurst invents the first resistive touchscreen

1982 – First human-controlled multi-touch device developed at University of Toronto

1983 – Hewlett-Packard releases the HP-150, one of the first touchscreen computers

1993 – The first touchscreen phone, the IBM Simon Personal Communicator, is launched by IBM and BellSouth. Apple also releases the Newton, a touch-sensitive PDA

2002 – Sony SmartSkin introduces mutual capacitive touch recognition

2008 – Microsoft introduces the Surface tabletop touchscreen, which could recognize several touchpoints at the same time

2011 – Microsoft and Samsung introduce PixelSense technology, in which an infrared backlight reflects light back to sensors that convert it into an electronic signal

At Your Fingertips

There are a growing number of electronic devices that use touchscreen technology, with many more uses on the horizon (percentage of Americans using each device in parentheses): (3, 4, 5, 6)

Smartphones (64%)

Tablets (42%)

eBook readers (32%)

Portable game devices (35%)

Automobiles/GPS navigators (30%)

Computer monitors (78%)

Shipments of touchscreen panels for devices (7) 

2012: 1.3 billion

2013: 1.8 billion

2016 (projected): 2.8 billion


Touching on the Benefits … and Drawbacks


There are distinct advantages to touchscreen technology. (8, 9)


Speed

Touchscreens are faster. With a trackball or mouse, the user has to locate the cursor, then position it to complete a task. Touchscreens react instantly to the point of contact.

Ease of use

Touchscreens are more intuitive, allowing the user to simply point at what they want to control.


Size

Touchscreens allow devices to become smaller, since they combine data entry with the display. No need for keyboards, cords, etc.


Accessibility

Touchscreens make it easier for those with disabilities to use computer technology. For example, people with mobility issues can simply use a finger or a stylus.


Creativity

Touchscreens can allow artists to “draw” directly on the screen.


Touchscreens, however, do have some drawbacks. (10)

Human touch needed

Touchscreens require direct contact from the user’s skin to activate. Therefore, the devices cannot detect touches from users wearing gloves (though special touch-sensitive gloves have been developed).


Screen orientation

Most touchscreen devices (phones, tablets, etc.) are held flat or at a comfortable angle, unlike vertical computer screens. Some think that makes computer screens incompatible with touchscreen technology.

Method of usage

Personal touchscreen devices are generally held close to the user. Computer screens are usually several feet away. The user has to reach to use them, and arm fatigue can set in.

Wear and tear

Bigger touchscreens mean shorter battery life. Also, they may be hard to keep clean.

Future of Touchscreens

While touchscreen technology has made smartphones and tablets widely popular, new devices using touchscreens are on the horizon: (11, 2)

Home appliances

Touchscreens on refrigerators and washing machines/dryers can give vital information and even deliver the day’s news!

Video games

Some video game makers are already ditching button controllers for touchscreens that allow players to tap their way to victory.


Next-generation screens

Researchers are working on touchscreens that use “microfluidic technology” to create buttons that “rise up” from the surface, as well as touchscreens in 3-D.


Source: http://www.computersciencezone.org/touchscreen-technology/ [...]

What Is ThinManager?

I am sure many professionals in numerous industries can relate to the hardship that results from this seemingly simple question.

As with most high tech companies, the answer is not always easily explained without creating even more questions.  “We are the global leader in thin client management and industrial mobility solutions.”  Sure, it rolls off the tongue, but does that explain ThinManager to someone who does not know about factory automation, thin clients or industrial solutions?

We have developed a new video that quickly and concisely illustrates what ThinManager is and does.  This is the first in a new series of videos offering a more “in-depth” look into the core functionality of ThinManager.


MailBag Friday (#43)

Every Friday, we dedicate this space to sharing solutions for some of the most frequently asked questions posed to our ThinManager Technical Support team.  This weekly feature will help educate ThinManager Platform users and provide them with answers to questions they may have about licenses, installation, integration, deployment, upgrades, maintenance, and daily operation.  Great technical support is an essential part of the ThinManager Platform, and we are constantly striving to make your environment as productive and efficient as possible.


Could you please advise me what configuration is required on a Windows PC to allow for a WinTMC session to span multiple monitors? We found the configuration on the thin server but still can’t get the WinTMC session to maximize across both monitors on the client system.

It is almost the same process as setting up MultiMonitor for a thin client.  In the terminal configuration wizard, select Enable MultiMonitor, select the resolution of your monitors, and set up the layout of the sessions.  If you choose the resolution your monitors are running at, the session should open full screen across both monitors once the client receives its configuration. [...]

Monthly Integrator Spotlight

The Desire for Virtualization Drives ThinManager Centralized Management in Ireland


For decades, industrial automation in North America has seen consistent growth in both volume and technological development.  In many parts of the world, however, there has been a slower adoption rate of these new innovations in the manufacturing sector.  One such innovation, Virtualization, has forced companies to reexamine their stance on waiting to adopt new technologies.

Enter NeoDyne, located in Cork, Ireland.  Specializing in creating production performance monitoring and process improvement solutions, this fifteen-year-old system integration firm has plunged head first into these new technologies as it continues to modernize facilities across Ireland.  One of their current deployments is a Manufacturing Information System (MIS) at the main processing facility owned and operated by a global leader in the cheese and whey protein market.

Martin Farrell, the Automation Director at NeoDyne, explained to us how their customer’s desire to virtualize brought them to adopting the ThinManager Platform.  “The IT managers at their main facility had been trying to support all their standalone SCADA systems for the last 20 years.  Very early on in the planning process they decided they wanted a virtualized environment without physical servers sitting on the plant floor.  They were looking to bring all of their applications into a VMware virtualized environment that would deliver their applications to the plant floor.  At that point we advised them to adopt thin clients to replace their standalone PCs.”

It became obvious to the team at NeoDyne that updating the network infrastructure while implementing a Wonderware System Platform based architecture in a virtualized environment was going to require a centralized management solution.  “We had limited experience with Terminal Services and thin clients before this deployment but knew that this was the direction we had to go.  We did research, read a lot of articles and found ThinManager. It has turned out to be a fantastic product that allowed us to tie everything together.  It is an intuitive software platform that makes configuration and management easy,” said Martin.

Once they decided to implement all of these platforms, the first step was to tackle the challenge of integrating them with their planned thin client deployment.  Because the facility had three separate areas for cheese production, ingredients, and utilities, they decided to construct a network architecture using dual redundant I/O servers in each area from Solutions PT.  “The facility is still very much a Unit based production outfit.  Even though it is an automated plant, they still have a model of individual units and each unit requires its own dedicated control room.  At the control level, it was a mix and match of everything, but we did put together a very organized structure on what had previously been a disparate group of PLC platforms from Siemens to Allen-Bradley to Mitsubishi to ABB.”

Once the architecture was firmly in place, Martin and the team at NeoDyne started applying a multitude of ThinManager Platform features to simplify everything and make the system more efficient.  “We have 2 ThinManager Servers set up in a mirrored configuration so if a thin client for any reason fails to connect to one, it automatically connects to the other. If we go home at the end of the day and one of the servers fails, we know they won’t be in the dark and we have time to get it back online with minimum disruption to the operations.”

Martin then explained some of the other ThinManager Platform features they decided to take advantage of.  “We deployed AppLink to deliver client sessions so the operators only have access to the InTouch Application without having to click around a Terminal Services session via the desktop to get to their SCADA application.  We also use it to launch a particular application so if the session crashes it will automatically reboot the application.”

NeoDyne, like many others, also uses the ThinManager Shadowing feature to provide off-site assistance by remotely logging into an operator’s user session and guiding them through problems in real time without having to be in the facility.  However, they are also using it to reduce licensing overhead costs for their customer.  “Something else we have done in the facility is to allow an operator to go between two control rooms without using additional licenses via the Terminal to Terminal Shadowing feature.  We can have one session that is shadowed between two control rooms and depending on which Control Room he is in, he can take ownership of either application.  These ‘part time’ thin clients that are used infrequently can identify as a shadowed thin client to avoid purchasing additional licenses to maximize cost efficiency.”

Their next goal was to find a way to allow the facility managers and supervisors to view the application without needing to travel through the facility.  ThinManager WinTMC made that a simple task without having to complicate the proposed facility network architecture.  “The main benefit of deploying the WinTMC feature is that it allows the plant managers and supervisors to access the application session on their desktop PCs to monitor plant performance and switch back and forth without needing additional hardware.  Also, as the PCs on the floor die they can replace them with thin clients instead of PCs as part of a continuing maintenance budget.  It gives them flexibility on how and when they buy more hardware.”

Now, more than a year since they began this project, we asked Ciaran Murphy, Automation Systems Lead for NeoDyne, how further deployment of ThinManager has been unfolding at the facility.  “What we have found is that using ThinManager to manage the thin client setup and the actual thin clients themselves down on the plant floor is allowing us to gradually retire more and more of their SCADA clients.  Their existing control systems and standalone SCADA systems are actually still there.  We put in a Manufacturing Information System (MIS) over the top to analyze plant performance.  Now, having seen the benefits of this technology, they will gradually replace their existing Thick clients on the Plant Floor with Thin Clients.  Over the next few years they will continue to replace the rest of the standalone PCs on the floor and just be left with thin clients.”

Now that ThinManager is efficiently driving the facility systems, we wondered what is next for the team at NeoDyne.  Ciaran was more than happy to tell us.  “With the success of this project, we showed other clients what ThinManager could do and are already deploying it into another ongoing project we are involved in with another major dairy here in Ireland who was impressed by the product.  Going forward, all of our future platform solutions for the next 5, 10, 20 years will have ACP ThinManager Platform as a fundamental part of our standard system designs and we will actively propose it.  To us there isn’t even a choice; if the site allows it, we will use ThinManager.”


ABOUT NEODYNE:  NeoDyne Plant Information Management solutions enable end user Lean Manufacturing, Process Performance Improvement, Overall Equipment Effectiveness, and Cost Improvement business transformation initiatives. The NeoDyne solution is specifically tailored for milk / food processing and combines features to manage and provide traceability for food batch manufacturing in continuous/batch processes. Plant automation and Quality/LIMS and ERP systems are joined into one unified solution.



To review cost savings of using the ThinManager Platform, visit our ROI Calculator here.

To read about successful ThinManager Platform deployments, visit here.

To see when the next ThinManager 2-Day Training Session is being offered, visit here.

Does Virtualization Really Provide ROI?

Heading into 2013, the hot topic around the IT water cooler seems firmly focused on the “Virtualization War” being waged between Microsoft and VMware.  There are arguments to be made on both sides of the debate, and who will come out on top is anyone’s guess at this point.  Microsoft has a history of coming to market late with a product, only to devour the entire market with a low price point.  While Microsoft looks to buy up the current Virtualization Market, VMware has seemingly decided to look past Virtualization as an “end game” and continues to push towards full cloud adoption, competing as a service to rival Microsoft Azure.

As is often the case in these full scale technology wars, it is the needs and wants of the end user that are ignored while an entire industry stays solely focused on a battle between goliaths.  Regardless of who manages to lure away each other’s executives, or which smaller firms are bought out to provide a deeper arsenal and better market positioning, it is our contention that, whoever wins, ThinManager is still a superior technology for managing virtualized environments.

As noted in our previous article comparing ThinManager to Citrix, as well as the discussion about where real return on a thin client system can be found, we must begin with the often ignored factor of a platform being able to manage the end device.  While widely regarded as an industry leader in the advancement of managing Virtualized environments, VMware View falls short of the mark when it comes to client management, because its managed images require clients running a full operating system.  Any platform that requires an end user OS also requires time and resources to keep those devices updated and operating properly.

So while VMware and ThinManager both operate on a “one to many” ratio, ThinManager requires far less resource and hardware management, which is a seemingly forgotten piece of the puzzle when determining ROI.  After all, true ROI must be measured by the total cost of labor hours as well as hardware and licensing costs.

Taking this comparison one step further, the Centralized ThinManager solution continues to pay bigger dividends every time an end device is replaced or added to the architecture of a facility. In a ThinManager environment, that device only needs to be plugged in, and it is instantly configured as the configuration is maintained at the server.  In a VMware environment, that device has to be manually configured before it can share data and deliver applications, requiring even more resource expenditure before being operational.  And the more clients you have, the more time will have to be spent performing maintenance and configuration in a VMware environment.

Yes, there are things that VMware brings to the table that are proprietary and unique to its platform, such as the PCoIP protocol.  However, that is an equivalent trade-off, at best, for its lack of RDP and ICA protocol support.  Add to that VMware’s lack of tiling support, client-to-client shadowing, session shadowing, and MultiSession, and its inability to connect to non-VMware virtual machines across a network, and one must begin to question the assertion that VMware is the answer to all things related to managing a virtualized environment.

Lastly, the lack of VMware support for both applications and management of Terminal Services / Remote Desktop Services (both of which are supported by ThinManager) creates a very large problem as most network facilities in the modern industrial landscape use Windows Server in some capacity.  It is one thing to compete against Microsoft in the battle for virtualization dominance, but it is another thing entirely to ignore the most widely adopted and used operating system on the planet.

While VMware and Microsoft are locked on each other with tunnel vision to the exclusion of the rest of the global landscape, there are more and more viable options for creating an efficient and well managed virtual environment…and ThinManager is at the top of that list.  After all, true Return on Investment comes from the money and resources you don’t have to expend while ensuring the least possible downtime for your facility operations.



To use the ThinManager ROI Calculator, visit here.

To read about successful ThinManager Platform deployments, visit here.

To see when the next ThinManager 2-Day Training Session is being offered, visit here.


Looking to the Cloud in 2013

Every year as the calendar comes to an end, a new year invariably elicits statements, declarations, and discussions about what kind of year it will be.  People proudly proclaim that this is the year they will lose those last ten pounds that have been hanging around, pundits make bold proclamations about the future of the political landscape, and industry professionals predict what the next wave will be to revolutionize their specific area of expertise.

For those of us who develop software, it has become clear that the prediction we need to be aware of is that 2013 will be the “boom or bust” year for all things cloud.  Then again, that was also the same prediction we heard heading into 2012.  And yet here we are again standing on the precipice of change.  No one can deny that the last year saw great advances in the world of cloud computing, specifically the proliferation of the public cloud by companies such as Amazon, Google, and Microsoft.  Previously, cloud computing had been the province of smaller niche software companies and data storage centers.  But as that business model showed gains in both popularity of adoption and profitability, the larger companies have finally committed their resources to capitalize on what has become a proven business model.

While mid-sized players in the private cloud arena such as VMware and Rackspace continue to offer greater efficiency and agility for specific end user needs, large global firms such as Oracle and HP are among the new IaaS and SaaS players in a rapidly expanding landscape predicted to generate more than $40 billion in customer spending.  And yet, Amazon still holds nearly 70% of that market share.

Just a few years ago, this conversation was limited to offsite data storage and accessibility of that data by remote users.  But with the rapid expansion of available cloud based web applications, file sharing platforms, and development of more such management tools by greater numbers of startups and boutique firms, the landscape has become littered with an overwhelming number of options for both early adopters of cloud technology and those looking to finally jump into the cloud.

After years of dealing with such a wide array of technologies and offerings from an endless list of developers, there has been a general shift in attitude towards the endless cloud debate.  There have been too many expectations for too long, and the tipping point is fast approaching.  Modern technology has become faster and more intuitive and accessible at a moment’s notice and end users will simply not wait much longer for a unified solution.

So while 2013 might not be the year the cloud finally dominates the IT world, it should bring about a firmer and more consistent public cloud offering.  If it doesn’t, the multitude of private cloud players just may very well continue to dilute the market share of the major players and wreak havoc in an already confusing and diverse world of cloud offerings.




To review the cost savings of using the ThinManager Platform, visit our ROI Calculator here.


To read about successful ThinManager Platform deployments, visit here.


To see when the next ThinManager 2-Day Training Session is being offered, visit here.


Virtualization in an Industrial Environment (Part 1)

This is the first in a three-part series focusing on the entire process of Virtualizing in an industrial environment.  While there is a lot of talk about Virtualization and VDI, we wanted to focus on the viability and deployment of Virtualization in an industrial and manufacturing environment, speaking to the concerns and difficulties specific to this industry.

Part 1: Centralization Before Virtualization


It is everywhere and there is no escaping it.  It is written about, discussed, recommended, and deployed in offices and facilities around the world every day.  It is the iPhone of IT: you might not know why you are buying it; you just know that everyone else is using it, so it must be the next great thing.  But as implementation of virtualized environments continues to become the norm rather than the exception, industry publications are reporting that up to 40% of Virtualization deployments are never completed and are eventually scrapped.

Why, then, is the “greatest thing since sliced bread” still being rolled out everywhere despite such a high rate of abandonment?

Simply put, companies are not asking the right questions before making the decision to go forward with radically changing the way they approach their computing environment.  In an industrial environment, where time literally IS money, there is a completely different question that must be asked and answered before considering virtualization.


Should you Centralize?

Implementation of a centralized system is the first important building block on the road to Virtualization.  So for those operating industrial environments, it is imperative to figure out whether you are a good candidate for Centralization.  That determination can be made by reviewing the following criteria:


How much support do your clients require?

Regardless of the number of clients you currently have deployed, this is the single most important factor when determining if you are a good candidate for Centralization.  In a harsh or spread-out environment where replacing client hardware is a common occurrence, there is a greater opportunity to realize long term savings by centralizing, even when faced with an initial cash outlay.  Chances are that if you have not centralized, you are allocating resources to maintain an antiquated or manual system to monitor and maintain your current processes.


Does your facility run an application-driven process, or use an application to monitor your processes?

An industrial process based on application usage would almost always benefit from making the switch to a centralized system.  In a manufacturing environment, having a single end user machine hosting an important process is a recipe for disaster.  When hardware failure occurs, being able to swap out the client and access the application from the server can reduce downtime on the floor from hours to minutes.  In addition, linking your clients to a centralized system can greatly reduce the costs associated with client deployment and continuing support.


Do you have a high client count?

If your operation uses a limited number of clients there might not be a realized cost benefit.  While most operations would see increased efficiency by making the change to a centralized system, the implementation could be cost prohibitive based on current hardware or environmental factors.  But the general rule of thumb is that the more clients you have, the more potential there is for savings.

Before you make the decision to Virtualize, take the time to assess whether there is an ROI in centralizing.  If your specific needs can be addressed by a move to a centralized system, while reducing the recurring costs related to your network and digital infrastructure, then Centralization is right for you.
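The criteria above reduce to simple arithmetic: amortized hardware plus yearly support labor for each approach.  A minimal sketch of that kind of comparison, using entirely hypothetical planning figures (the hardware prices, lifespans, support hours, and labor rates below are illustrative assumptions, not vendor pricing):

```python
def annual_cost(clients, hw_cost, hw_life_years, support_hrs_per_client,
                labor_rate, server_cost=0.0, server_life_years=5):
    """Annualized cost = amortized hardware + yearly support labor.
    A deliberately crude model; every figure is a planning assumption."""
    hardware = clients * hw_cost / hw_life_years + server_cost / server_life_years
    labor = clients * support_hrs_per_client * labor_rate
    return hardware + labor

# Standalone PCs: pricier boxes, shorter service life, per-seat support.
standalone = annual_cost(40, hw_cost=1200, hw_life_years=3,
                         support_hrs_per_client=20, labor_rate=60)

# Centralized thin clients: cheap terminals plus an amortized server, with
# most maintenance consolidated at the server instead of at each seat.
centralized = annual_cost(40, hw_cost=300, hw_life_years=6,
                          support_hrs_per_client=4, labor_rate=60,
                          server_cost=15000, server_life_years=5)
```

With these made-up numbers the centralized figure comes out well below the standalone one, and because the labor term scales with the client count, the gap widens as you add clients, which is exactly the rule of thumb above.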


Next week we will continue this series by discussing how much Virtualization is right for your company.


*For more information regarding implementation of a centralized system in your industrial facility, contact one of our experts (http://www.thinmanager.com/acp/contactus.php)



Virtualization Basics Part 4

Server Consolidation for Industrial Automation

Anyone considering virtualizing their Industrial Automation system needs to first look at the Servers and follow a process similar to the one used for standard commercial systems.  There are, however, some special considerations for the Industrial user, as well as some special benefits along the way.  The following are some simple steps that are needed, and some elements of guidance for the Industrial Automation user.  Use these in conjunction with other tools and processes that are available from many sources.  One good source is searchservervirtualization.com.  Using these suggestions, you should be able to make your Server Virtualization and Consolidation project flow smoothly.


The first step is to collect data, and this will likely take the most time.  You need to take a complete inventory of what you have now for both hardware and software.  The hardware detail needs to include the specifics for the CPU, RAM, HDD size and controller, and the make and model of the NIC(s).  Software details should include specifics regarding versions, node name requirements, license file information, database needs, and any direct ties to hardware.
In the typical Industrial Automation system, you will likely come across older non-Ethernet PLCs, flowmeters, or other data and control devices that may require special communication cards.  In some cases this may mean that it is not possible to virtualize some elements of the system.  Note these details during the data collection phase.

Another element of software data collection is the current system’s performance levels.  Make note of the current CPU and RAM specifications and utilization.  You may want to run some Performance Monitor logs over the course of a few weeks, or even months, in order to gather enough data to reliably understand the system’s needs.

The collected data should help to define the roles of the servers.  You will want to put this data into some table-like form in order to easily catalogue and sort all of the collected information.  While the processes and data collection styles used by commercial users might be helpful, the number of devices under review is typically much smaller for an Industrial Automation system.  Given the smaller system size, a simple spreadsheet with rows and columns will likely suffice for the collection media.  For the performance data, you may want to plot and print out some trends that reflect the performance utilization for the period of data collection.
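As a sketch of that table-like collection form, each inventory row might be modeled like this.  The fields mirror the text above (CPU, RAM, disk, NIC, software details, measured performance); the server names, values, and the `virtualizable` flag are illustrative assumptions, not real data:

```python
from dataclasses import dataclass

@dataclass
class ServerRecord:
    """One row of the consolidation inventory spreadsheet."""
    name: str
    cpu_cores: int
    ram_gb: int
    disk_gb: int
    nic: str
    software: str
    avg_cpu_pct: float    # averaged from Performance Monitor logs
    peak_ram_gb: float
    virtualizable: bool   # False for boxes tied to special comms cards

# Illustrative entries only -- real values come from your own inventory.
inventory = [
    ServerRecord("SCADA01", 4, 8, 500, "Intel I350", "HMI runtime", 22.0, 5.1, True),
    ServerRecord("HIST01", 8, 16, 2000, "Broadcom", "Historian + SQL", 35.0, 11.8, True),
    ServerRecord("PLCGW01", 2, 4, 250, "Intel", "Gateway w/ DH+ card", 8.0, 1.2, False),
]

# Sort the catalogue the way a spreadsheet would: heaviest CPU loads first,
# and flag anything that must stay physical.
by_load = sorted(inventory, key=lambda r: r.avg_cpu_pct, reverse=True)
stays_physical = [r.name for r in inventory if not r.virtualizable]
```

Whether you keep this in a spreadsheet or a script, the point is the same: one row per server, sortable by utilization, with the non-virtualizable devices called out explicitly.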


Evaluation of the available hardware and software requirements is the next step.  This means taking a close look at the hardware to determine which of the existing servers will be able to act as Hypervisor or Host systems.  These systems need to have certain performance capabilities and need to have the right individual components supported by your Hypervisor.  The CPU, HDD controller, and NIC are usually the most crucial components for support.  Your Hypervisor vendor should not only be able to provide you a list of supported hardware, but also some level of performance expectations for the particular CPU and RAM configurations.

If your servers are newer, they are likely underutilized.  You may be able to simply add RAM and/or storage and have a device that can host multiple Virtual Machines.  During this evaluation phase, you can determine which of your hardware boxes are usable, and what improvements they might require to reach the proper performance level for your system needs.

The other evaluation piece is to determine software needs.  The responsible Industrial Automation software vendors out there are on board with the virtualization trend and provide information regarding the type, speed, and number of CPU cores that allow their products to work best.  However, you should take some of the performance information gathered in the data collection phase and determine the real needs of the software, plus some room for growth.  Often even the best Industrial Automation software provider will overstate the needs just to allow for the most demanding users.  The standard system is likely not taxing to the level of the vendor provided specifications.

Other elements of software evaluation include compatibility, dependencies, and shared space.  Does the software vendor say the product is compatible with a virtual environment?  Are dependent software pieces, such as a database, third-party driver, or other third-party software, capable of running and supported in a virtual environment?  By reading the vendor's literature, or discussing your plans with their sales or support personnel, you can answer these questions.

Is it possible to put multiple pieces of software in the same Virtual Machine?  Consolidating software onto one machine is one way to increase a physical server’s utilization, and it helps your virtual server consolidation just as much.  It might also reduce overall complexity in the system.  In other words, as you review the entire system, do not simply try to replace individual physical machines with virtual machines.  Give the system a thorough overall review and use the data collected to evaluate everything, so that the result is a clean, well-performing system.


With the data collection phase complete, you should now be able to evaluate your current resources and system software needs.  You can begin to build out the design of the system: assign roles to the physical hardware, construct the setup of the individual virtual machines, and apportion the various virtual machines to specific physical machines.  The work and knowledge gained prior to this step will greatly affect the effort it takes to complete.  In other words, if you have gathered all the data and evaluated it properly, this step will be a very short one and could result in a one- or two-page document.
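Apportioning VMs to hosts is, at its simplest, a packing exercise. The sketch below uses a first-fit rule on free RAM; host names, VM names, and sizes are hypothetical, and a real design would also weigh CPU, storage, and redundancy.

```python
# Hypothetical hosts with free RAM (GB) and planned VMs with RAM needs (GB).
hosts = {"HOST-A": 48, "HOST-B": 32}
vms = [("SCADA-VM", 16), ("HIST-VM", 24), ("BATCH-VM", 8), ("ENG-WS", 16)]

# First-fit: place each VM on the first host with enough free RAM.
placement = {}
for vm_name, ram in vms:
    for host, free in hosts.items():
        if free >= ram:
            placement[vm_name] = host
            hosts[host] -= ram
            break
    else:
        placement[vm_name] = None   # no host can take this VM as currently sized

print(placement)
```

A run like this, plus the system drawing and naming information, is essentially the short design document described above.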

Your result will be a system drawing, a list of VMs to be created, the software to be installed on each VM, node name and address information, a list of things that will not be virtualized, and a list of leftover or backup hardware.


Implementation involves the configuration of the hardware, the actual creation of the VMs, and bringing everything online.  As with any normal Industrial System change process, you will build this outside of your existing production environment, test it, and then plan your actual cutover to the new system.  The actual steps here will be unique to your type of Industrial environment and its production requirements.

One thing to keep in mind is that once you have a virtualized system, recreating your system in a test-bed setup will be much easier.  Because the needs of the test bed are less than production’s, you can use a smaller number of Hypervisor devices to keep a running copy of the system somewhere else.

Imagine now that you need to make a change to the system.  Rather than maintaining the old multi-machine system, or an incomplete system for testing changes, you can easily keep a complete replica of what is on the plant floor.  This can be handy for any plant engineer, system integrator, or vendor working on a support issue.  Rather than shipping around X number of physical machines, you simply ship portable hard drives with VMs on them and some basic network configuration information.  It is just one more benefit of the virtualization process.


Beyond the basic review of HDD capacity and controllers, storage needs are not discussed here.  There are several ways to implement storage in your virtual system, with a wide range of capabilities, cost, and complexity.  If your system is small, I would recommend keeping it simple and just using local storage on the Hypervisor.  If your system is larger, has a greater need for backup and recovery features, and the budget allows, you can look at Fibre Channel (FC) and other network consolidated storage options.

While not discussed in detail in this article, our recommendation from ACP, home of ThinManager, is to make one or more of your virtualized servers a Terminal Server, then use those to feed your client stations.  Whether you use Terminal Services or just Virtual Workstations, ACP’s ThinManager platform will go a long way toward making management of your system easier and more feature-rich than any other software solution available.

By David Gardner