Sunday, February 16, 2014

UNITED STATES UAS/UAV Regulations Limit Corporate Growth Strategies.

It's not hard to find articles about UAVs and drones in the news recently. They have become a hot-button issue, pitting their utility in research against concerns over personal privacy. It's hard to argue that the benefits of autonomous aircraft are not significant. Commercially, the applications of point-to-point autonomous delivery systems are staggering. Other applications include border monitoring, agricultural research, regulatory surveillance, and environmental monitoring, to name a few.

The issue with this technology is that in the United States, regulations prohibit commercial entities from operating a UAV without specific certificates, and those certificates come with very restrictive conditions. The truth is that unless the operator is a university or government entity, there is little freedom to operate in this area.

Some may say that these regulations are essential to protect the privacy of citizens who may fall victim to surveillance, or that UAVs in the wrong hands could jeopardize the security of the United States itself. While these may be real concerns, they obscure the fact that limiting United States commercial entities' access to these types of technologies also limits their growth potential. Other countries already have well-established and flexible programs that allow commercial use and development of UAV platforms. Countries like Australia, New Zealand, and Brazil have very open and unrestricted policies on UAV use. Even Mexico encourages the use and development of UAV platforms.

So what message does this send to United States-based companies that are in a position to develop and benefit from this technology? In short, it says we are behind. The US is currently developing regulations for the commercial use of UAVs and has a mandate to do so by 2015. Even so, is that too little, too late? It puts us two to four years behind countries that have established UAV programs and have already worked these systems into their business models. It also limits innovative companies that develop UAV systems to selling into the countries that allow their purchase and use.

I live in what some would describe as the greatest country in the world, yet we continually see this type of oversight that reminds us of just how shortsighted our government policies can be. In truth, we may have become exactly what our ancestors fought to break free of so long ago. If we want to reclaim the throne, we must be more flexible in how we regulate innovative technology.



Saturday, February 15, 2014

Database Rapid Application Development Tools On the Cheap

Some time ago I explored the realm of RAD (Rapid Application Development) tools for building database applications. It didn't take long to discover that there are very few options available, and fewer still with the level of sophistication needed to build enterprise solutions.

While a number of open source RAD tools exist, very few have the maturity that would justify the risk of investing time and energy to learn them. My initial search turned up several contenders. One such offering, OpenXava, had quite a large following and on the surface looked very promising. After downloading and installing the package, I was immediately faced with an issue. The install itself was straightforward, but something seemed to be wrong with the Tomcat settings, causing my connection to the database to fail. I was never able to identify the cause of the error, even after several days of scouring forums. While troubleshooting, I did find many people who had successfully installed and deployed applications with the tool, but even after trying a second install on another computer running Ubuntu I came up empty. It really was a shame, because the concept behind OpenXava was attractive. I came to the conclusion that the tool was suffering from a deployment packaging maturity issue and moved on.

After a few more hours of research I happened across a tool called WaveMaker. Unlike OpenXava, this tool came packaged with a complete web-based WYSIWYG editor and database schema tool. Once installed, the interface seemed a bit daunting, but after a few hours I became familiar with the workflow. The interface reminded me of Visual Studio and provided a comprehensive set of built-in control widgets. Once a widget is dragged into the UI, the properties pane allows you to set attributes to customize the look and feel along with some behavior. Digging deeper into the widget exposes the ability to add client-side JavaScript to further enhance the user interface.

One of the most remarkable features of WaveMaker is the ease with which users are able to build data views. Once your database schema is developed, you can drag the database widget onto the canvas and the system automatically creates CRUD views based on one of several templates. This mechanism is also forgiving when you add or remove data table fields: all you have to do is edit the views by adding or removing the field. The data grid views provided by the system are also capable of providing drop-down list editing and data field formatting. Custom formatting can also be accomplished using JavaScript.

Deployment of applications is also fairly seamless. There are three separate options for deployment: you can deploy directly to a Tomcat server, generate a .war file to install using the Tomcat application management tool, or use a PhoneGap build option that I have yet to explore. I have used the first two methods and they work equally well.

I have been using this tool for about three weeks and it seems I am only scratching the surface. I wouldn't say my search for viable open source RAD tools is over, but this option certainly has provided much of the functionality and ease of use that one would expect from a RAD tool. I do want to talk a bit about one other important aspect of any RAD tool. The ecosystem of any software development tool is incredibly important; the number of people using a given solution often speaks volumes about that tool's utility. In the case of WaveMaker, there seems to be an adequate following, but the real benefit comes in the form of the documentation of the tool on the website.

I will continue to develop on this tool and pass along any gems or annoyances I encounter. I would love to hear what others think about this tool or any other open source RAD tools.

Thursday, February 13, 2014

Project Portfolio Management

I work for a large corporation as a system analyst in a small group of dedicated people with enormous project loads. Our group of eight people is currently managing over 60 projects. We are constantly struggling with managing the record-keeping overhead of such a high workload. We recently began the exercise of identifying a PPM (Project Portfolio Management) tool that would allow us to automate many of these documentation activities. What we found surprised me.

With all the off-the-shelf solutions available, we found that most actually increased our overhead. Not only do most of these systems cost quite a bit, they rob you of valuable time by forcing you to input data point after data point. Now, most of the features of these off-the-shelf solutions are optional, but the only way to get the maximum benefit (in the long run) is to input as much information as possible. Failure to do so can ultimately limit your reporting options later. And as anyone who knows these types of systems can attest, missing information is difficult, if not impossible, to recall after the fact.

The real issue is that traditional IT is targeted at sales, marketing, operational administration, and customer retention. There are very few off-the-shelf global solutions that tie project management workflows together. The truth is that most of these solutions set out to solve only part of the problem. What we really need is a solution that will enable us to manage projects from three different but closely related perspectives.

Project Management - This relates to the project description, scope, requirements, and constraints. It is the problem description and any underlying business and functional requirements. It also includes information relating to justification, budgeting, and prioritization.

Engineering - This is the collection of documents, specifications, and other data that make up the solution design. It includes schematics, engineering drawings, source code, manuals, etc. that actually describe the solution in a way that would allow it to be constructed.

Resource Management - This is the description of resource allocation in the form of talent, technology, and capital that are required to complete the project from concept to sign-off.

The landscape of project management software is rife with options that do one of these things very well. Other, less prevalent options do two of these things with at least modest competence. Sadly, there are none that do all three in a cohesive and comprehensive way.

So what do you do in light of such a glaring gap? Some would say develop your own solution. It could be argued that this is untapped potential for some eager future software mogul. I would certainly entertain any candidate that did a respectable job of managing this type of project. Any takers?

I would love to hear your insights on this. Please comment responsibly!!

Wednesday, January 4, 2012

PC Based Control Systems - Bridging the Gap Between the Virtual and Real World

     Having been intimately involved in industrial and laboratory automation for the better part of 15 years, I have come to understand a few undeniable truths. A good developer can create well-behaved software systems if he or she does a little planning and adheres to that plan. The problem with PC-based automation control systems is that software is only half of the equation. These systems also have one or more real-world components (sensors, cameras, motors, solenoids, escapements) that require software control. This all seems relatively straightforward in theory, but a number of things tend to occur during the development of such systems that one might not always expect. I will try to break these issues down into two categories: timing and window of opportunity.

Timing

     Anyone who has ever created a software system that communicates with a device understands the complexities of timing as they relate to software execution, threading, and serial and Ethernet communication. Let's start with the software. For one thing, not all operating systems are created equal. Windows in all its flavors uses a round-robin priority scheduler for its real-time threads. This means that you never really know when a particular thread will receive CPU time, so any attempt at real-time manipulation of external devices is basically a crapshoot. Fortunately, for most purposes the speed of execution and the availability of multiple CPUs make this issue moot. In most cases, if you have a margin of error of more than 100ms you are probably OK. Anything less than this and you risk missing a critical real-world interaction in the system. Linux, on the other hand, has two real-time scheduling modes, round-robin (SCHED_RR) and FIFO (SCHED_FIFO), so threads are less likely to be preempted except by higher priority threads. This brings up another important distinction: Linux has 99 separate real-time thread priority levels where Windows has only 16. This leads us into a discussion of operating system determinism (a very hotly debated topic in control engineering). Suffice it to say that no operating system is completely deterministic; however, a system can be designed to deterministically satisfy its operating requirements. Determinism also applies to communication methods and protocols, and it is measured in much the same way as operating system determinism: a link can never be completely deterministic, but it can have a known and predictable rate of transfer. As many of you might have guessed, this means that a system built on a general-purpose operating system and communication method is by its very nature non-deterministic, especially when that system runs software that dynamically creates processes and threads.
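To make the Linux side of this concrete, below is a minimal sketch of requesting a SCHED_FIFO real-time thread through the POSIX thread API. It assumes a Linux host, compilation with -pthread, and sufficient privileges (real-time scheduling normally requires root or an appropriate rtprio limit); the control loop body is just a placeholder.

    // Minimal sketch: create a SCHED_FIFO thread at a fixed real-time priority.
    // Assumes Linux, -pthread, and sufficient privileges (rtprio limit or root).
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    void* control_loop(void*)
    {
        // Placeholder: poll sensors and drive outputs here.
        return nullptr;
    }

    int main()
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);

        sched_param param{};
        param.sched_priority = 80;                      // valid range is 1-99 for SCHED_FIFO

        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &param);
        // Without this, the new thread silently inherits the creator's normal policy.
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        pthread_t t;
        if (pthread_create(&t, &attr, control_loop, nullptr) != 0)
            std::perror("pthread_create");              // typically EPERM without privilege
        else
            pthread_join(t, nullptr);

        pthread_attr_destroy(&attr);
        return 0;
    }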

Window of Opportunity

     All this talk about timing and determinism does not bode well for our PC-based control system. In fact, this type of control does have some limitations related to timing. The real problem is that when software is used to control a device, and that device has to manipulate some material or part within a fixed time frame, we need to ensure that the combination of the software execution, the communication, and the action happens within our window of opportunity. If the system cannot do this reliably and repeatably, we simply have to devise another way.

So let's look at an example:

     Say we have a system that sorts bolts by size. We can determine the size of each bolt as it passes by on a conveyor using a high-speed vision system. As the bolts pass by, the system determines which size (A or B) each bolt is. A subsequent process will divert only the B-size bolts onto another conveyor using a pusher. We have no more than three seconds between the inspection and the diversion. The inspection produces a result in one second, so that leaves us two seconds. This seems like a lot of time in cyberspace, and it is. The twist here is that we do not know that only one bolt or result will be in the queue at any given time. So now we have to introduce a sensor into our system to detect the bolt's presence, read the result queue, and actuate the pusher. Say we have two bolts in the queue with a spacing of only 0.4 seconds. Now our system is put to the test, because we have approximately 100ms to read the sensor, 100ms to read the queue, and 200ms to activate the pusher. I give the lion's share of the time to the pusher because it actually has to move; not only that, it has to reset its position from the previous bolt. In this case I would bet money that a Windows OS would be very unreliable. For one, we are using polling to read the sensor, which takes time, and Windows may not give our polling thread priority; in that case our thread stalls for 30ms. Oops! We missed the bolt. I have seen this very thing happen. In fact, the only definitive way to remedy this situation is to augment the system with a PLC (Programmable Logic Controller) and port as much of the logic as possible to ladder logic. This is not the end of the world, mind you, but it can add to overall project budgets.
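To illustrate the bookkeeping, here is a rough sketch of the divert logic with the timing budget from this example. The numbers are the ones above, not measured values, and read_sensor(), pop_result(), and fire_pusher() are hypothetical stubs standing in for the real sensor, vision-result queue, and pusher interfaces.

    // Illustrative only: checks whether each divert decision fits the 400 ms
    // bolt-to-bolt window. The I/O functions are stubs standing in for the
    // real sensor, vision-result queue, and pusher interfaces.
    #include <chrono>
    #include <iostream>
    #include <thread>

    bool read_sensor() { return true; }   // stub: real digital input read goes here
    char pop_result()  { return 'B'; }    // stub: next result from the vision queue ('A' or 'B')
    void fire_pusher() { std::this_thread::sleep_for(std::chrono::milliseconds(200)); } // stub: actuate and reset

    int main()
    {
        const auto window = std::chrono::milliseconds(400);    // worst-case bolt spacing

        for (int bolt = 0; bolt < 5; ++bolt)                    // handle a few bolts for the demo
        {
            while (!read_sensor()) { /* poll until a bolt arrives */ }

            const auto t0 = std::chrono::steady_clock::now();
            if (pop_result() == 'B')
                fire_pusher();                                  // divert only the B-size bolts

            const auto elapsed = std::chrono::steady_clock::now() - t0;
            const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
            std::cout << "bolt " << bolt << ": " << ms << " ms"
                      << (elapsed > window ? "  <-- missed the window" : "") << '\n';
        }
        return 0;
    }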

Learning by Bad Examples

     So what did I learn from all this? Timing really is everything, and sometimes you can write bulletproof software and still come up short. In the world of real-time computing as it applies to control engineering, you have to pay close attention to the requirements of the system and the capability and responsiveness of each system component. I have also learned to continue to develop my knowledge of real-time scheduling algorithms, thread management, and synchronization, as these concepts can help optimize system timings.


Friday, December 30, 2011

.NET vs Java


     Deciding which architecture to use for software systems can be a daunting task. Organizations often put less thought into the features and attributes of a particular framework than into the costs associated with that decision. Many choose Java over .NET simply because of lower initial costs but lose sight of the total cost of the system over its life cycle. Other organizations pick a particular technology because they feel it is a better fit with their current state. Both of these drivers may be inadequate for identifying the best architectural solution. In fact, many large organizations are realizing that selecting one or the other is not always the best strategy. As market conditions continue to change and information system complexities grow, the need for diversified problem-solving tools may outweigh any benefit obtained from a unified framework.

Libraries

     When comparing architectural frameworks, it is essential to identify what makes each one suitable for a particular purpose. Java has a considerable developer following and market share in enterprise development. Because of these two facts, the availability of open source libraries for the Java platform is astounding. These libraries include SWT (the Standard Widget Toolkit), JAI (Java Advanced Imaging), and JSF (JavaServer Faces). There are literally thousands of open source libraries that developers can use to speed application development. This adds a considerable amount of appeal to the framework.

     The .NET framework also has some open source support and many libraries to aid in development. Because the .NET framework is younger than Java, and due to Microsoft's insistence on maintaining the framework's copyright, the open source offerings have not developed at the pace of Java's. This is partly due to market share as well. That is not to say that .NET is less capable than Java. In fact, .NET is used in far more web applications than Java due to ASP.NET's ease of use, seamless integration with web-based technologies, and support for many different programming languages, including Java. In addition to web-based applications, .NET has extensible and user-friendly UI development features.

Developer Availability

     When choosing a framework technology, it makes sense to quantify the availability of developers with skills and experience in that technology. Organizations want to choose an architecture that puts them in a position where skilled developers are abundant.

     Java developers are typically in higher demand. This is due, at least in part, to Java's market share. At the time of this writing, a search on careerbuilder.com returned 9,228 postings nationwide for Java-related positions. It is difficult to determine from this whether the demand for developers outpaces availability. The Bureau of Labor Statistics predicts a 3 percent decrease in demand for software developers by 2018, though demand for software engineers is expected to increase.

     A similar search on careerbuilder.com for .NET-related positions produced only 4,354 postings nationwide. This fairly accurately reflects the market share ratio of Java to .NET. Of course, it is impossible to determine actual developer availability from this data. Each technology has a strong following, and finding skilled developers should not be a major concern in either case.

Competencies
  
     Above all else, the most important deciding factor for an architecture is capability. Does it provide the tools required to solve a given problem? As stated earlier, each technology has a wide variety of libraries available, more than any one developer could ever learn in his or her career. Java has many more open source libraries; however, not all of those libraries are created equal. Many are very old and no longer maintained; others are poorly written from the start. The same classifications apply to the .NET libraries, so choosing the right libraries is equally complex. Due to Java's market share and open source community, there are more unofficial support channels through forums, professional organizations, and blogs. Furthermore, Java runs on every machine that supports the JVM (Java Virtual Machine).

     The .NET framework has an adequate open source community that grows every day. It is also worth noting that .NET is often considered easier to learn for new developers and simplifies RAD (Rapid Application Development). The latter is a major driving factor in the switch by many organizations to .NET for enterprise systems. Start-ups tend to prefer open source technologies due to their relatively low initial cost. Anything built with .NET requires the Microsoft Windows OS and the .NET framework; however, new open source initiatives like Mono are making it possible to develop and run .NET applications on other platforms.

Cost

     Java beats .NET hands down when comparing initial investment, but it would be incredibly shortsighted to base the entire decision on this. System architecture complexities have a significant impact on all future projects. As the organization and its information system needs grow, the individual strengths and weaknesses of the underlying architecture become more apparent.

     Java's open architecture provides a very low initial cost. This is primarily why start-ups often opt for open source architectures. The use of open source technologies like Java and PHP reduces software licensing investment to almost zero. It also enables them to build and run applications on machines running open source operating systems, which further reduces the need for start-up capital.

     Technologies based on .NET typically require a Windows-based workstation for development and a Windows Server for web applications. Windows Server installations can reach $4,000, and development tools like Visual Studio run in the $1,000-$3,000 range.

     All of these factors tend to put .NET at a disadvantage, but organizations must consider the total lifecycle cost of a solution. According to the Free Republic, the average yearly salary of a Java developer is $10,000 higher than that of a .NET developer. That difference would easily offset any initial savings outlined above.

Longevity

     Any discussion of system architecture and technology would be incomplete without calling into question its longevity. In these two cases that may seem unnecessary given their obvious appeal and current market share. Unfortunately, or fortunately, technologies do change, so a little healthy skepticism may be in order.

     Java has been in play for over 15 years and has gained an incredible following in that time. Java has perhaps billions of installations, from mainframes to access cards. It has proven itself over and over again as a versatile, robust, and reliable framework. Part of that success is owed to Sun Microsystems' vision in making the technology open, which fostered the innovation and adoption that made Java a success. Java's continued success is fueled by this momentum, but its continued market dominance is being challenged by .NET and by Oracle's wavering support of Java's openness.

     Microsoft's .NET technologies have never been and never will be open. This and other factors have prompted many analysts to give the technology a ten count over the last few years. That has certainly not stopped the framework's steady, if modest, gains. The fact that many organizations are starting to identify .NET as the framework of choice for large enterprise systems is interesting. More than that, many large organizations are coming to the conclusion that one architecture or framework just is not enough. Maybe that should be the lesson here: architecture choice is more about capabilities than about nickels and dimes, or perhaps ego.

     As an organization, or more specifically as an agent for an organization, one may have to face the complexities of this decision with a very open mind. It may be impossible to identify one technology that fits every case perfectly. When initial cost, total cost, resource considerations, capabilities, and stability are all factored in, the choice may be clear. In reality, though, this may be much more of a guessing game for most. Organizational goals, IT strategies, business models, and current state all have a role in choosing appropriate technologies.

Wednesday, December 28, 2011

C++ Mixed Mode Programming

As a laboratory automation specialist, I often encounter projects that deviate from what most people would call standard software development. Some of the systems I contribute to are custom-made, from the material handling hardware to the control systems and especially the software. In fact, my colleagues and I spend a great deal of time overcoming some very complex problems integrating the hardware systems with software control systems. I recently had the opportunity to work on a project that required the use of a .NET assembly designed to drive a Programmable Logic Controller. This would normally be very straightforward; however, due to the system's performance requirements, the control software had been written in native C++. This left only a few possible solutions for integrating the .NET assembly.

Obviously, there are several ways to call managed code from native C++ via interop methods such as Platform Invoke or COM. The real issue with these methods is the inherent complexity of memory management on the managed side, largely due to the CLR's automatic garbage collection: objects on the CLR heap do not maintain the same memory address throughout their lifetime. In these instances, interior pointers and pinning pointers are used either to track an object as the garbage collector relocates it or to prevent the object from being relocated at run time.
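As an illustration of the pinning side of this, here is a minimal C++/CLI sketch (compiled with /clr). The native routine is a made-up stand-in; the point is simply that pin_ptr holds the managed array at a fixed address while native code works with a raw pointer to it.

    // Minimal /clr sketch: pin a managed array so a native routine can use a raw pointer.
    #include <cstring>

    #pragma unmanaged
    // Made-up native routine that expects a stable, non-moving buffer.
    static void native_copy(const unsigned char* src, unsigned char* dst, size_t len)
    {
        std::memcpy(dst, src, len);
    }
    #pragma managed

    using namespace System;

    void CopyToNative(array<Byte>^ managedBuffer, unsigned char* nativeDst)
    {
        // pin_ptr prevents the garbage collector from relocating the array
        // for as long as 'pinned' stays in scope.
        pin_ptr<Byte> pinned = &managedBuffer[0];
        native_copy(pinned, nativeDst, managedBuffer->Length);
    }   // the array is unpinned here and free to move again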

Another method of C++/CLI interop is to create a mixed-mode application that blends managed and unmanaged C++ without the use of a COM wrapper or P/Invoke. This requires no changes to the native code but does require significant changes to the compiler settings (a search on Google will turn up the basic project template). In my case, this is the method I used to integrate the .NET assembly into the native application, primarily because of my particular needs for this project. The assembly was only needed in one function that read some inputs and manipulated some outputs based on that read, so the fact that the object would only have function scope did not present a significant design challenge. In fact, the only major hurdle was Microsoft Visual Studio's crippled C++/CLI IntelliSense, but that is another story altogether.
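A rough outline of that function-scope approach is sketched below. The assembly, namespace, and member names (PlcVendor.dll, PlcIo, ReadInput, WriteOutput) are hypothetical stand-ins for the actual .NET driver, and only this translation unit is assumed to be compiled with /clr while the rest of the project stays native.

    // Hypothetical sketch of the function-scope interop described above.
    // This translation unit is compiled with /clr; the rest of the project is native.
    #using <PlcVendor.dll>          // hypothetical .NET driver assembly

    bool UpdateDivertOutput(int sensorChannel, int pusherChannel)
    {
        // The managed object exists only for the duration of this call,
        // so no global handles, pinning, or lifetime management are needed.
        PlcVendor::PlcIo^ io = gcnew PlcVendor::PlcIo();

        bool boltPresent = io->ReadInput(sensorChannel);    // read the sensor input
        io->WriteOutput(pusherChannel, boltPresent);        // drive the pusher output
        return boltPresent;
    }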

In all interop methods there are significant data conversion considerations. The native C++ and managed C++ worlds differ significantly in most of the non-intrinsic types, and managed strings are especially vulnerable to problems due to the CLR's handling of string literals. The real issue, in my opinion, is that managing pinning and interior pointers quickly becomes tiresome, so unless you need a global reference to a managed class, this method may not be worth the extra work. I am interested in hearing other stories of C++ interop and how each was handled. Please comment with your experiences.
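As a closing aside, the string conversions mentioned above are usually easiest with the marshaling helpers that ship with Visual C++; below is a minimal sketch (again assuming /clr compilation, with function names of my own choosing).

    // Minimal sketch of string conversion at the managed/native boundary.
    #include <msclr/marshal_cppstd.h>
    #include <string>

    std::string ToNative(System::String^ managed)
    {
        // Copies the characters into a new std::string, so nothing needs
        // to stay pinned after the call returns.
        return msclr::interop::marshal_as<std::string>(managed);
    }

    System::String^ ToManaged(const std::string& native)
    {
        return msclr::interop::marshal_as<System::String^>(native);
    }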