
Wednesday, August 09, 2017

Characteristics of Software Technical Expert - As documented by Hemant Shah

Hemant Shah listed these characteristics of a Software Technical Expert in a mail to participants of the Accelerated Technical Expert Program. I think this is a great list, and I forwarded the mail to various other teams in Geometric Ltd. I am reproducing the list here as it will be a great reference for everyone.

(PS - Hemant has an uncanny ability to simplify complex ideas into simple core concepts. If you are in GeometricPLM/HCL Technologies, don't miss his sessions)
 
While Nitin has been articulating the qualities he wants to see in Technical Experts, here is a list of things that has been compiled.

I understand that it might sound too superhuman. Also, you would already know most of these things.

This is just to document things in a single place.

Note that not all of them are unique or mutually exclusive. Some of them might repeat already mentioned points from a different viewpoint.

Technical Expert

  1. A Technical Expert should be able to design software.
  2. A Technical Expert should be able to implement software.
  3. A Technical Expert should have a wide-ranging experience.
  4. A Technical Expert should be the go-to person in the team for Technical queries.
  5. A Technical Expert should be able to give Technical Presentations.
  6. A Technical Expert should be able to make, review and evaluate a Technical Proposal.
  7. A Technical Expert should be able to formulate Technical Guidelines, e.g., Coding Guidelines, etc.
  8. A Technical Expert should be able to evaluate a new software and give recommendations.
  9. A Technical Expert should do Technical Benchmarking.
  10. A Technical Expert should be aware of / read technical standards.
  11. A Technical Expert should be aware of / read the latest things available in his/her field.
  12. A Technical Expert should be member of Technical bodies. 
  13. A Technical Expert should write Technical Blogs.
  14. A Technical Expert should be able to compare and evaluate two software products.
  15. A Technical Expert loves challenging problems and attempts at solving them.
  16. A Technical Expert should answer queries which are posted on Technical Forums.
  17. A Technical Expert should take part in Technical Competitions. 
  18. A Technical Expert should be a judge of a Technical Competition. 
  19. A Technical Expert should attend Technical Conferences / Events. 
  20. A Technical Expert should be a Speaker at Technical Conferences / Events. 
  21. A Technical Expert should be able to guide / mentor new comers in the organization and the project team.
  22. A Technical Expert should be able to analyse technical reports / findings.
  23. A Technical Expert should have some preliminary business sense. 
  24. A Technical Expert should be aware of the various tools available in his/her field of expertise, or should be able to find them out and figure them out.
  25. A Technical Expert should take part in Techathons, Hackathons. 
  26. A Technical Expert should have in-depth knowledge of a particular topic / subject. He / she should be an authority on that subject. A Technical Expert should have broad knowledge of other areas. 
  27. A Technical Expert should have broad knowledge of the various Software Development Processes.
  28. A Technical Expert should be part of the Technical Evaluation / Promotion of Individuals. 
  29. A Technical Expert should improve the efficiency of the Team on Technical matters.
  30. A Technical Expert should share his/her knowledge across multiple domains.
  31. A Technical Expert should share technical information with others.
  32. A Technical Expert should inspire others also to become a Technical Expert.
  33. A Technical Expert should do external Technical Certifications. 
  34. A Technical Expert should be an innovator. 
  35. A Technical Expert actively contributes on Social Media. 
  36. A Technical Expert knows how to build a technical team. 
  37. A Technical Expert challenges his / her peers with technical problems.
  38.  A Technical Expert comes up with other challenging problems. 
  39. A Technical Expert should be a curious person. He / she should be an early adopter of a new technology. 
  40. A Technical Expert comes out with challenging test papers / question papers for the next generation to solve.
  41. A Technical Expert should be able to talk to, and articulate things to, non-Technical folks like Human Resources, Business, Finance, Sales, etc.
  42. A Technical Expert should be able to simplify complex topics so that other non-technical folks or other technical folks are able to understand and comprehend the complex topics.
  43.  A Technical Expert should be able to take Technical Interviews. 
  44. A Technical Expert should be able to sustain a dialog with other Technical Experts. 
  45. A Technical Expert should be able to teach others about the Technical Skills which he/she has acquired.
  46. A Technical Expert should be sincere, persevering, expressive, diplomatic and approachable, with great inter-personal skills. 
  47. A Technical Expert should be an avid reader.
  48. A Technical Expert is a Guru.
  49. A Technical Expert is a Final Authority on Technical matters. 
  50. A Technical Expert is a Role Model for others.


Thursday, April 13, 2017

DevOps Imperative for Enterprise Apps like PLM – Part 3

In Part 1, I wrote about how implementing a shorter deployment cycle is imperative for companies like AutoX (i.e. companies like Ford, Toyota and Airbus) and for PLM vendors (i.e. companies like Dassault Systèmes and Siemens PLM), and how implementing DevOps practices is the way to achieve these shorter cycles.
In Part 2, I wrote about how to achieve the seemingly impossible dream of a major PLM version upgrade in an Auto company in one month and minor version upgrades in a week, and what features PLM vendors should add to support such fast deployment cycles.
In this part, I plan to write about the changes PLM customers like AutoX (i.e. companies like Ford, Toyota, Airbus, etc.) have to make in their way of working to achieve fast PLM updates.
DevOps for PLM is imperative for AutoX to reduce the maintenance, upgrade and enhancement costs of PLM while taking maximum advantage of new PLM features in day-to-day work.
I am assuming you (the reader) are the AutoX company.

Point 1 : Understand that you are a 'software company' now. (whether you like it or not)


For you it is actually a more complex situation than for a traditional software company, because you have to 'integrate' software into your own workflow. So think about how you will manage the source code of your software (configuration management), compiled executables/binaries, release cycles, code integration, feature/bug life cycles, version management, deployment management, etc.

Even though you are a software company, you are probably not developing your own software product, and you are not a company doing projects for others. You are somewhat like a 'systems integrator'. You have your own unique set of challenges. Unfortunately, software literature is usually focused on 'products' or 'projects'; there are very few references available for your situation.

Point 2 : You will have to customize the PLM and other Enterprise software for your own needs. Out of The Box (OOTB) will not give you the competitive advantage that you need.


  1. These customizations will be done by your own team, the PLM vendor, or some third-party development company.
  2. You have to integrate code from multiple sources. These code-bases may be delivered at different intervals, with different technology stacks.
  3. These code-bases will have complex dependencies (sometimes circular dependencies).
  4. Compiling these code-bases and deploying them in production is a complex task.
  5. Tracking deployment metrics and production performance is required.
'Delivering' this code in production is a 'complex pipeline' of activities. Treat this as an 'assembly line' for software. DevOps is, at its core, about managing this 'assembly line' for software.
It is possible to apply 'assembly line' concepts from manufacturing (coming from Kanban, the Toyota Production System, the Theory of Constraints, etc.) to this software assembly line and thereby improve its efficiency:
  1. Think about dependencies. Identify and break circular dependencies (a minimal dependency-check sketch follows this list).
  2. Treat the whole program as a 'system' and apply 'systems engineering' concepts to streamline workflows.
  3. Apply concepts like controlling WIP and reducing batch sizes. Features not yet delivered to the end user are 'inventory'. Features under development are 'Work In Progress' inventory. The time-boxed sprints of Scrum are essentially about controlling WIP and reducing batch size.
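
To make the first item concrete, here is a minimal dependency-check sketch in Python. The code-base names and the dependency map are hypothetical assumptions; in a real setup they would come from your build metadata or configuration management system.

```python
# Minimal sketch: detect circular dependencies between code-bases.
# The dependency map below is hypothetical; in practice it would be
# generated from build metadata or the configuration management system.

def find_cycle(dependencies):
    """Return one dependency cycle as a list of names, or None if there is no cycle."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in dependencies.get(node, []):
            if dep in visiting:                      # back-edge => cycle found
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.remove(node)
        visited.add(node)
        path.pop()
        return None

    for node in dependencies:
        if node not in visited:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None


if __name__ == "__main__":
    # Hypothetical code-bases from the in-house team, the PLM vendor and a third party.
    deps = {
        "inhouse-ui": ["vendor-core"],
        "vendor-core": ["thirdparty-integration"],
        "thirdparty-integration": ["inhouse-ui"],    # circular!
    }
    cycle = find_cycle(deps)
    print("Circular dependency:", " -> ".join(cycle) if cycle else "none")
```

Running a check like this as part of the integration pipeline makes a new circular dependency fail the build immediately instead of surfacing much later in production.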

So where to start ?

  1. Define configuration management tools and practices.
    1. Decide which configuration management tool will be used in-house and which tools will be used by your vendors.
    2. Put every customization in configuration management (including build scripts, database schema migration scripts, deployment scripts, etc.).
    3. Define how the code-base delivered by a vendor will be merged into your configuration management tool.
    4. Define configuration management practices in such a way that you can easily identify what has changed between versions.
  2. Mandate that the vendor has to deliver 'automated test scripts' along with the source code (and not just test results).
    1. A major bottleneck in DevOps implementations is the lack of automated test scripts.
    2. If you need to test all new features/bug fixes manually, then the deployment cycle (i.e. your batch size of features) increases a lot.
    3. Overall, not having 'automated tests' reduces the efficiency of a DevOps implementation.
  3. Define the Integration Pipeline (a minimal sketch follows this list).
    1. How will the code be merged?
    2. How will it be compiled and the executables created?
    3. How will it be 'staged' on a test environment?
    4. How will automated tests run?
    5. How will automatic deployment happen?
    6. Every single step in the integration pipeline should be 'managed' in your configuration management system.
    7. Once code is delivered by the vendor (or released by your in-house team), the entire integration process should take less than 1 week.
  4. Define integration and release cadence.
    1. Make sure integration and release cycles are as short as possible.
    2. Make sure that 'deployment downtime' is as short as possible. Use newer cloud deployment tools like on-demand virtual machines, Docker containers, etc.
  5. Define a 'sane' agile change management process.
    1. Make sure 'change management' is part of the integration pipeline.
    2. When projects/companies move from Waterfall to Agile (especially with code developed by a vendor), the biggest confusion is about managing 'change requests'.
  6. Measure everything in production.
    1. Use tools like fluentd, the TICK stack or the ELK stack to collect metrics from production deployments.
    2. Create dashboards which show these production metrics to your team.
    3. Share the dashboards with your development team. Let them see how the applications they developed are performing in production.
    4. To facilitate this data collection in production, define design/coding practices which push the data to these systems.
  7. In case of a mixed deployment (part desktop, part server), define and implement how 'automatic' deployment/upgrade of the desktop parts will be done along with the server parts.
    1. PLM systems require integration with CAD/CAM/CAE applications and customization of those applications.
    2. A DevOps implementation will require pushing changes in production for these applications as well. An automatic update mechanism will be of tremendous help.
    3. Building metrics and bug/crash reporting inside these customizations will increase the efficiency even more.
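
To make the 'integration pipeline' of step 3 concrete, here is a minimal sketch in Python. The branch name, build/deploy scripts and test command are placeholders (assumptions, not PLMX or AutoX specifics); in practice the same fail-fast sequence would usually be expressed in a CI server such as Jenkins or GitLab CI rather than a hand-rolled script.

```python
# Minimal, hypothetical sketch of an integration pipeline: merge -> build ->
# stage -> test -> deploy, failing fast at the first broken step.
# All commands below are placeholders; a real pipeline would call your actual
# configuration management, build and deployment tooling.
import subprocess
import sys

PIPELINE = [
    ("merge vendor drop",    ["git", "merge", "--no-ff", "vendor/release-11.0"]),
    ("build",                ["./build.sh"]),              # placeholder build script
    ("deploy to staging",    ["./deploy.sh", "staging"]),  # placeholder deploy script
    ("automated tests",      ["pytest", "tests/regression"]),
    ("deploy to production", ["./deploy.sh", "production"]),
]

def run_pipeline():
    for name, command in PIPELINE:
        print(f"==> {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Pipeline stopped: '{name}' failed with exit code {result.returncode}")
            sys.exit(result.returncode)
    print("Pipeline finished: new version is in production.")

if __name__ == "__main__":
    run_pipeline()
```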
This list is just my initial thoughts. I will keep updating it. :-)

Please share your feedback.

Tuesday, January 17, 2017

DevOps Imperative for Enterprise Apps like PLM – Part 2

In Part 1, I talked about how implementing a shorter deployment cycle is imperative for companies like AutoX (i.e. companies like Ford, Toyota and Airbus) and for PLM vendors (i.e. companies like Dassault Systèmes and Siemens PLM), and how implementing DevOps practices is the way to achieve these shorter cycles.

My colleague Sreekanth Jayanti shared this comparison that illustrates the benefits of 'shorter deployment cycles'


Now in this part, I intend to explain how to achieve the seemingly impossible dream of a major PLM version upgrade in an Auto company in one month and minor version upgrades in a week.

Let's continue with the example of AutoX (an automotive company implementing PLM) and PLMX (the PLM vendor). To achieve this dream, AutoX has to change its way of working and PLMX has to change its licensing model, and to some extent even its business model. Let's start with the changes PLMX has to make.

Usually deploying/upgrading a new version of PLMX will require:
  1. Creating the new version setup.
  2. Re-applying all customizations to the newer version (e.g. changed web pages, UI customization, workflow changes, upgraded plugins, etc.) and testing them.
  3. Testing that all existing integrations work with the newer version. If they don't, fixing the bugs, removing deprecated APIs, etc., and making them work.
  4. Upgrading the database schema.
  5. Migrating the data to the newer schema.
  6. Upgrading documentation, etc.
To achieve all these steps in a 'short cycle', PLMX has to make many changes in its way of working and licensing model.

PLMX should License the tools developed for in-house cloud deployment and upgrade to customers


For many years, PLMX has acted as if the difficulties of 'deployment' and 'upgrade' are not really its problem but the problem of AutoX (i.e. the problem of the customer). This thinking is now changing (but more slowly than expected). The major driver for this change is the 'cloud deployment' of PLMX. Now PLMX is managing its own 'production cloud deployment' and is facing all the deployment and upgrade problems of AutoX. Obviously PLMX is better equipped to handle these challenges, and it is developing tools to simplify these tasks. AutoX (i.e. the customers of PLMX) requires exactly the same kind of tools. Today PLMX is not licensing these tools to its customers yet. And that is the first change PLMX has to make.

PLMX should license its automated regression test suite for the public interface to customers


The major driver in achieving 'shorter' deployment cycles is 'automated tests'. There is NO way AutoX can achieve a one-month deployment if it relies on manual regression testing. Also, AutoX will not be able to write completely new automated tests for every upgrade cycle; it has to 'reuse' the tests already written. It will make AutoX's life a lot easier if PLMX includes its 'automated tests' as part of the PLMX license. AutoX can then adapt these tests to the customizations that AutoX has done. When a new release of PLMX is available, AutoX has to take the new set of JARs, JSPs and unit tests from PLMX, re-apply the customizations that AutoX has made to this set, and then test the new version with its own customizations in its own test environment.
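
As an illustration of what a reusable, vendor-supplied automated test could look like, here is a minimal pytest-style sketch in Python. The REST endpoints, environment variables and the 'AX-' part-numbering rule are purely hypothetical assumptions; a real PLMX suite would exercise the product's actual public APIs.

```python
# Hypothetical regression test sketch (pytest style). The base URL, credentials
# and the part-numbering rule are assumptions for illustration only; a vendor-
# supplied suite would target the real public API of the PLM system.
import os
import requests

BASE_URL = os.environ.get("PLM_TEST_URL", "https://plm-staging.autox.example/api")
TOKEN = os.environ.get("PLM_TEST_TOKEN", "dummy-token")
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def test_create_and_fetch_part():
    # Create a part through the public API ...
    created = requests.post(f"{BASE_URL}/parts",
                            json={"name": "Bracket", "type": "MechanicalPart"},
                            headers=HEADERS)
    assert created.status_code == 201
    part_id = created.json()["id"]

    # ... and verify it can be read back unchanged after the upgrade.
    fetched = requests.get(f"{BASE_URL}/parts/{part_id}", headers=HEADERS)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Bracket"


def test_custom_part_numbering_rule():
    # AutoX-specific customization layered on top of the vendor suite:
    # part numbers must start with the (hypothetical) "AX-" prefix.
    response = requests.post(f"{BASE_URL}/parts",
                             json={"name": "Panel", "type": "MechanicalPart"},
                             headers=HEADERS)
    assert response.status_code == 201
    assert response.json()["number"].startswith("AX-")
```

The second test is the kind of AutoX-specific addition that would live in AutoX's own repository and be re-run, together with the vendor-supplied suite, against every new PLMX drop.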

Even better if PLMX shares its automated regression test suite on a sharing platform like GitHub


I will dream some more and assume that PLMX has put its 'public test suite' on a sharing platform like GitHub. Now AutoX just 'clones' the unit tests from GitHub and changes them to test its own customizations. AutoX is now contributing its own tests (which illustrate some bugs) back to this sharing platform. All customers of PLMX are now sharing automated unit tests and effectively making their 'production deployments' faster.

PLMX should develop tools for 'incremental' migration of data


PLMX already provides some tools to manage database schema changes. However, applying these 'schema changes' to production databases is messy and time consuming. When AutoX migrates its PLMX back-end database to a new schema, invariably issues are detected and 100% of the data is not migrated in the 'first attempt'. So incremental data migration tools are critical: the second attempt should just migrate the 'failed' data and should not start from scratch again.
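
Here is a minimal sketch, in Python, of the 'incremental' idea: keep a checkpoint of already-migrated record IDs so that a second run only retries what failed. The fetch_source_records and migrate_record functions are placeholders for whatever the vendor's real migration tooling does.

```python
# Minimal sketch of incremental data migration: keep a checkpoint of records
# already migrated so that a re-run only processes what failed or was skipped.
# fetch_source_records() and migrate_record() are placeholders for the real
# vendor migration tooling.
import json
from pathlib import Path

CHECKPOINT = Path("migrated_ids.json")

def load_checkpoint():
    return set(json.loads(CHECKPOINT.read_text())) if CHECKPOINT.exists() else set()

def save_checkpoint(done):
    CHECKPOINT.write_text(json.dumps(sorted(done)))

def run_migration(fetch_source_records, migrate_record):
    done, failed = load_checkpoint(), []
    for record in fetch_source_records():
        if record["id"] in done:
            continue                      # already migrated in an earlier run
        try:
            migrate_record(record)        # placeholder: schema transform + insert
            done.add(record["id"])
        except Exception as exc:          # real code would log the root cause
            failed.append((record["id"], str(exc)))
    save_checkpoint(done)
    print(f"migrated so far: {len(done)}, failed this run: {len(failed)}")
    return failed
```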

PLMX should develop tools/recipes for cloud deployment using virtualization and containerization of its components


Today PLMX comes with an 'installer' where the IT admin has to click Next and select various options to set up the newer version of PLMX. To some extent PLMX now uses virtual machine images for test setups. But there is no containerization yet. Chef/Puppet recipes are not available yet. Automatic provisioning and horizontal scaling of a PLMX deployment is still not easily possible.
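
As a hypothetical illustration of what such recipes could look like once container images exist (they do not today, as noted above), here is a small Python sketch that provisions a disposable PLMX test environment with plain Docker CLI calls. The image names, ports and environment variables are assumptions.

```python
# Hypothetical sketch: spin up a disposable PLMX test environment with Docker.
# The image names, ports, container names and environment variables are
# assumptions; no such official images exist today.
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

def start_test_environment():
    docker("network", "create", "plmx-test")
    docker("run", "-d", "--name", "plmx-db", "--network", "plmx-test",
           "-e", "ORACLE_PASSWORD=changeit", "example/plmx-database:11.0")
    docker("run", "-d", "--name", "plmx-app", "--network", "plmx-test",
           "-p", "8080:8080", "example/plmx-server:11.0")

def destroy_test_environment():
    for name in ("plmx-app", "plmx-db"):
        docker("rm", "-f", name)
    docker("network", "rm", "plmx-test")

if __name__ == "__main__":
    start_test_environment()   # later: run regression tests, then destroy_test_environment()
```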

PLMX should start using scalable, distributed data stores like Hadoop, Apache Cassandra


The PLMX back-end is still a traditional RDBMS (e.g. Oracle Database or Microsoft SQL Server). Both Oracle and Microsoft SQL Server now support horizontal scaling / scale-out / distributed database architectures. Open-source data stores like Hadoop and Cassandra also provide high availability and performance. The PLMX back-end should be scalable, providing high availability without a single point of failure.

Of course, all these steps will also help PLMX in its own 'cloud deployment' of the PLMX application. It will take at least 3 to 5 years for PLMX to achieve all these steps. However, PLMX will need a 'marquee' customer like AutoX to try out all these tools in a production scenario. And that is Part 3 of this series.

Sunday, January 08, 2017

DevOps Imperative for Enterprise Apps like PLM and ERP – Part 1

In the dictionary, "imperative" is defined as "of vital importance; crucial" (adjective), "an essential or urgent thing" (noun).

Today I find that DevOps is the focus of 'end-user product companies' (be it a desktop product, a web application or a mobile application). There are almost no documented cases of using DevOps with Enterprise products like ERP and PLM. Enterprise product companies like SAP, Dassault Systèmes and Siemens PLM are using DevOps practices internally while developing SAP, Enovia (PLM) or Team Center (PLM). However, Enterprises like BMW, Ford and others are not really getting the benefit of DevOps while deploying these enterprise applications. The current situation is 'Lose – Lose' for both the Application Vendor and the Customer. My prediction is that the first company that realizes this and makes its application ready for DevOps practices in Enterprise deployments will make a 'killing' in the market. This is the "DevOps Imperative" that I am going to write about.

Let's take the example of deploying PLM software (like Enovia from Dassault Systèmes or Team Center from Siemens PLM) in a large Auto or Aero company like BMW, Ford, Airbus or Boeing. Let's see what the current status is and then dream about what is possible. Then let's see how we can make that dream a reality.

Current Status – Why it’s a Lose->Lose situation for Vendors and Customers:

Let's assume Auto company "AutoX" has PLM system "PLMX" at version 10 deployed in its plants. Typically, PLM vendors release one major new version every year. Hence Version 11 of PLMX is now available.

  1. For AutoX, a version upgrade is a major exercise. It may cost millions of dollars and take at least one year to upgrade the deployed version.
  2. So if AutoX wants to move to version 11.00, it will take one year and millions of dollars to upgrade. By the time AutoX upgrades to version 11.00, version 12.00 of PLMX will be available, so AutoX is always playing catch-up: by the time AutoX finishes the upgrade to version 11, it is time to start the upgrade to version 12.
  3. In the end AutoX decides to skip one version and upgrade directly to version 12.00. However, that means a more difficult upgrade.
  4. Since new features available in Version 11 will not be available to AutoX, AutoX will end up creating customizations for some of the features available in Version 11. When AutoX upgrades to Version 12, these customizations have to be removed so that the same requirements can be fulfilled by OOTB features. Many times AutoX will end up maintaining its customizations and NOT using the similar Out Of the Box features.
  5. These customizations take time and money to develop and maintain. AutoX is essentially spending extra money on features which are available out of the box.
  6. For the Vendor, many customers like AutoX will not upgrade to Version 11, so the Vendor will not immediately get feedback on the usefulness of new features.
  7. The Vendor has to support many old versions which are in production. This adds a lot of cost to the Vendor's development efforts.
Essentially it’s a ‘Lose Lose’ for everyone involved.

How do we convert this “Lose Lose” situation to “Win Win” ?

The upgrade is a "Lose Lose" situation because of one key reason: the time required to finish the upgrade.
Now assume that you are the CIO of AutoX, close your eyes and imagine the following:
  1. The Vendor has released version 11 of PLMX.
  2. AutoX has put version 11 of PLMX on its DevOps 'pre-production' servers.
  3. Within 2-3 weeks, all the customizations and other integrated applications are migrated and tested on version 11 of PLMX.
  4. In the 4th week, with the available DevOps migration tools (using containers, virtual machine images, private cloud setups, etc.), the production schema is upgraded, the production servers are upgraded, and new versions of the integrated applications are installed.
  5. In one month AutoX is now on version 11 of PLMX.
  6. If the Vendor releases a hot fix or service pack, it is put in production within a week.
With the above assumptions, the scenario will be different. What are the benefits to AutoX and the PLM vendor in such a scenario?
  1. AutoX is able to benefit from new features immediately. AutoX will now have a much better ROI for its purchase of PLMX.
  2. AutoX doesn't have to develop and maintain so many in-house customizations where similar functionality is available Out of the Box.
  3. For AutoX, the cost of an upgrade will now be negligible and it will become a 'routine' exercise.
  4. Once a majority of PLMX customers move to the same kind of setup, PLMX doesn't have to spend time/money on supporting older versions of PLMX. It can redirect its developers to new features, bringing more value to PLMX customers.
For many months now I have been 'dreaming about the above scenario'. Wherever I talk about this 'dream', the reaction I get from the audience (most of them PLM veterans) is: "it's impossible". Is it really impossible???
I urge you to view/read John Allspaw and Paul Hammond's talk at the Velocity Conference 2009 titled 10+ Deploys Per Day: Dev and Ops Cooperation at Flickr. (Slides) At that time "10+ deploys a day" was considered impossible. However, this seminal talk broke through that mental barrier, and after that many previously "impossible feats" were achieved at many different companies.
The following table shows data from the book "The Phoenix Project" by Gene Kim on the "number of deploys per day" at a few popular and successful internet companies. This data is from the year 2012.

Company | Deploy Frequency | Deploy Lead Time | Reliability | Customer Responsiveness
Amazon | 23,000/day | Minutes | High | High
Google | 5,500/day | Minutes | High | High
Netflix | 500/day | Minutes | High | High
Facebook | 1/day | Hours | High | High
Twitter | 3/week | Hours | High | High
Typical enterprise | Every 9 months to 1 year | Months or Quarters | Low/Medium | Low/Medium

Amazon is doing 20,000+ deploys a day, yet when I talk about one month to deploy a 'new version' of PLM, people still consider that impossible. The Enterprise Applications world is way, way behind in terms of DevOps practices. Did Amazon reach 20,000+ deploys a day in a few months? Obviously not. It took them 5 years to move away from their original OBIDOS content delivery system to the current architecture.

If AutoX wants to get to a 1-month production deployment and starts today, it will probably take them 3 to 5 years to get there. Obviously it will require a mindset change, investment in developing the necessary tools and practices, and cooperation from the PLM vendor.

Is that investment worth it? I emphatically say "YES". I will repeat my prediction again: the first PLM vendor that supports this and the first company that achieves it will make a 'killing' in the market. Studies have established that there is a positive correlation between deploy frequency and the profitability of the company. So it's a great opportunity for companies like Ford, BMW, Airbus, Boeing, etc. and for PLM vendors like Dassault Systèmes and Siemens PLM.

In Part 2 of this article, I will explain the steps that are required from AutoX and from the PLMX vendor to achieve this 'seemingly' impossible dream.

Readers, do you still think it is an impossible dream? I would love to hear from you. Please leave a comment.

NOTE - All the opinions expressed in this article are my personal opinions