Jollibee #ChickenSad: A costly IT problem

Calen Martin Legaspi

Jollibee is losing millions of pesos a day due to an IT problem that forced some of its stores to close. Here are the possible causes of the problem and the lessons we can learn from it

Last week, Jollibee Foods Corporation announced that a major IT system change it undertook was to blame for the lack of the popular “Chickenjoy” in some of its stores. The change affected the fast-food giant’s inventory and delivery system, forcing 72 of its stores to close.

The brand has taken a hit: aside from its loyal customers taking their disappointment to social media, Jollibee lost at least 6% of its sales for the first 7 days of August due to the problem. Based on Jollibee’s 2013 revenue, that amounts to P92 million. This is on top of the P500 million ($11.37 million) that the company supposedly shelled out for its new IT system. (Editor’s note: Other reports say Jollibee stands to lose some P180 million ($4.09 million) in revenues a day)

I asked some of my friends in the industry about what could have caused Jollibee’s costly IT disaster and the lessons we could learn from it. Here’s a summary of their insights and mine.

ISSUES

1. System migration

Jollibee had been using a product from software company Oracle to manage its supply chain, which covers inventory, the placing of orders, and the delivery of supplies to stores. Insiders said a dispute with Oracle prompted Jollibee to switch to its rival, SAP.

Now, supply-chain software products aren’t out-of-the-box solutions that you can just install and run. They need to be customized to fit a company’s business processes. The customization usually takes months, if not over a year, and involves programming and configuration. Jollibee outsourced this project to a large multinational IT service provider. Jollibee’s Oracle system had been running for years and almost certainly contained a huge amount of complex programming, continuously modified over time. There must have been fragile interrelationships between these programs and configurations, making the migration to SAP a huge and risky move.

2. Staffing and expertise

The migration project was outsourced to a large multinational IT service provider with no sizable local team handling SAP, according to members of the Philippine SAP community I was able to interview. My interviewees had never heard of that vendor taking on SAP projects in the Philippines before, which is why they concluded that the vendor did not have significant SAP expertise locally.

They also said there was a flurry of recruiting of SAP professionals by that vendor. This was a “red flag” because it seemed the vendor was having trouble filling the positions required for the project. The vendor reportedly brought in people from India and other countries, but sources said the project remained understaffed.

To assemble a large team of outsiders and have them work on a complicated project that quickly? It’s troublesome. We can assume the outsiders had not worked under a common methodology and culture, and had no shared understanding of standards and processes. It takes a while to learn the ropes.

3. Schedule and size

This was a half-a-billion-peso project, but it ran on a schedule of just a little over a year – from the time the recruitment activity started until the supply-chain issue broke out. Many of the projects I’ve seen costing just 5% of this amount had two-year timetables. A project of this size requires 3 to 5 years to implement properly – from inception to transition. Maybe this was just the first phase, but unfortunately for Jollibee, it was already a costly one.

4. Testing

Testing – checking whether the system’s features and processes actually work – is one of the most overlooked aspects of IT projects. Unfortunately, most projects leave it until the end. The later defects are found, the more expensive they are to fix.

I asked an SAP expert how testing is done in SAP projects, and he replied, “You’d be surprised at what passes for unit / functional / integration testing in Oracle and SAP projects.” While the practices and tools for testing have matured over the last two decades, very few of them are properly applied in ERP projects like Jollibee’s, according to my source. (ERP, or Enterprise Resource Planning, refers to the software systems that run a company’s business processes.)

RECOMMENDATIONS

1. Start small

The larger the IT project, the greater the chance of failure. This is because it’s difficult to accurately predict a project’s requirements, system design, and human interactions upfront. Stakeholders don’t really know what they want until they actually get to use a system. Engineers can’t validate their designs until they have built components to test. And the way engineering teams and business units interact during the course of a project usually has a huge impact on schedules and deliverables.

It’s better to start with a very small project – one that can be done over 6 months, with 5 people or fewer. The result can be presented quickly to stakeholders and used as input for succeeding changes or enhancements. Engineers will also be able to test their designs before any huge construction is done, making changes less costly. It’s important that the initial team include veterans. The team members can then become seed members of succeeding larger projects, or of several small projects done in parallel.

2. Testing should be core and automated

An IT project must employ Test-Driven Development, where testing is central. Basically, this approach means that tests are defined before each piece of work is started. Testing is done not just by dedicated “testers,” but by every member of the team. Automated tests are preferred over manual; rich automated testing tools have emerged over the last two decades, and many of them are free and open source.

As the system is being built, automated tests should be written for even the smallest units of the system. Since the tests are automated, they can be run multiple times a day, giving the team instant feedback on defects. This results in high-quality work at every step of the project.
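
To make this concrete, here is a minimal sketch of what test-first development can look like in Java with JUnit. The StockLevel class and its methods are hypothetical examples, not taken from any actual Jollibee, Oracle, or SAP system; the point is simply that the test describing the expected behavior is written before the code that makes it pass.

```java
// A minimal test-first sketch. StockLevel is a hypothetical example class,
// not part of any real ERP product; the tests below are written first and
// describe the behavior the code must satisfy.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class StockLevelTest {

    @Test
    void deductingAnOrderReducesTheStockOnHand() {
        StockLevel chickenjoy = new StockLevel(100); // 100 units on hand
        chickenjoy.deduct(30);                       // a store orders 30 units
        assertEquals(70, chickenjoy.onHand());
    }

    @Test
    void deductingMoreThanTheAvailableStockIsRejected() {
        StockLevel chickenjoy = new StockLevel(10);
        assertThrows(IllegalArgumentException.class, () -> chickenjoy.deduct(30));
    }
}

// Only after the tests exist is the simplest implementation written to make them pass.
class StockLevel {
    private int onHand;

    StockLevel(int onHand) { this.onHand = onHand; }

    int onHand() { return onHand; }

    void deduct(int quantity) {
        if (quantity > onHand) {
            throw new IllegalArgumentException("Not enough stock on hand");
        }
        onHand -= quantity;
    }
}
```

Because tests like these run in seconds, a build server can execute the whole suite on every change, which is what makes the several-times-a-day feedback loop described above possible.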

3. Delivery must be continuous

One of the riskiest things I see organizations do time and time again is a big-bang migration to a new system. There’s an announcement that says, “System X will go live by (launch date)!” When that day comes, it’s invariably a mess. People can’t get work done with the new system, and the old system is gone. If they’re lucky, the old system is still around while the new system undergoes bug fixing.

Compare this to how Google and Facebook roll out their changes. Notice that your Gmail and Facebook get new features every few weeks or months. If you don’t like a feature, there’s a button that allows you to go back to the old way of doing things. That button is Google’s and Facebook’s way of getting feedback from their users. They roll out the new feature to a set of users. If the users opt for the old feature, then Facebook and Google know they still need to improve the new one. Then they roll it out again to another set of users. When they reach the point where few users opt for the old feature, they know they’ve gotten the new feature right and make it a permanent part of their systems.

You can apply this to business systems. Don’t roll out your system in a big bang. Roll it out feature by feature – every few weeks or months – to a set of users, and then get their feedback. It is easier and safer to roll out small changes than large ones, and even the deployment and rollout can be automated. This will certainly be less costly for your company.
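
As a rough illustration of how a gradual, per-user rollout can be wired into a business system, here is a sketch in Java. The class and method names (FeatureFlags, OrderService, “new-allocation-engine”) are hypothetical, and a real project would more likely use an established feature-flag library; the idea is simply that the new behavior is switched on for only a fraction of users while everyone else stays on the old, proven code path.

```java
// Hypothetical percentage-based feature flag, using made-up class names;
// real systems usually rely on an off-the-shelf feature-flag or rollout tool.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class FeatureFlags {
    // Flag name -> percentage of users (0-100) who should see the new behavior.
    private final Map<String, Integer> rolloutPercentages = new ConcurrentHashMap<>();

    void setRollout(String feature, int percentage) {
        rolloutPercentages.put(feature, percentage);
    }

    boolean isEnabledFor(String feature, String userId) {
        int percentage = rolloutPercentages.getOrDefault(feature, 0);
        // Hash the user id so the same user consistently gets the same experience.
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < percentage;
    }
}

class OrderService {
    private final FeatureFlags flags;

    OrderService(FeatureFlags flags) { this.flags = flags; }

    void placeOrder(String userId, String item, int quantity) {
        if (flags.isEnabledFor("new-allocation-engine", userId)) {
            placeOrderWithNewEngine(item, quantity); // the feature being rolled out
        } else {
            placeOrderWithOldEngine(item, quantity); // the old, proven path
        }
    }

    private void placeOrderWithNewEngine(String item, int quantity) { /* new logic */ }
    private void placeOrderWithOldEngine(String item, int quantity) { /* old logic */ }
}
```

Starting the rollout at, say, 5% of users and raising the percentage only as feedback and error rates allow mirrors what Google and Facebook do; setting it back to zero is the equivalent of their “go back to the old way” button.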

4. Be transparent

My final piece of advice: Be transparent with your client. Allow your client to monitor the progress of the project and catch problems earlier rather than later. Provide concrete evidence, such as:

  • Regular demos. Provide your client with working software, not PowerPoint presentations. Let them try out the features of the software. Get their feedback.
  • Test reports. Automated tests are run multiple times a day by centralized systems called Continuous Integration Servers. These systems give clients reports on the various tests and whether they succeeded or failed. Some of these tests, known as Acceptance Tests, can be read by non-technical users, so they can see what behavior is being added to the system and whether the system already complies with that behavior (see the sketch after this list).
  • Quality metrics. Aside from test reports, various tools can be added to the Continuous Integration Server to generate other reports, among them metrics on quality. In Java, for example, there are tools that can check whether the system contains code patterns that lead to bugs, logic that is too convoluted, or code that violates coding standards.
  • Big visible charts. If the team works onsite, various charts can give the rest of the organization an idea of the progress of the team. Two of the popular charts are Task Boards and Burndown Charts.
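
As an example of the kind of acceptance test a non-technical reader could follow, here is a sketch in Java using JUnit’s display names. The scenario and the ReplenishmentPlanner class are hypothetical and deliberately tiny so the sketch runs on its own; in a real project the test would drive the actual system, and the Continuous Integration Server would publish the test names as a checklist of behaviors that pass or fail.

```java
// Hypothetical acceptance-test sketch: descriptive display names double as a
// behavior checklist in Continuous Integration reports.
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

@DisplayName("Store replenishment")
class StoreReplenishmentAcceptanceTest {

    @Test
    @DisplayName("A store that falls below its reorder point gets a delivery scheduled")
    void lowStockTriggersADelivery() {
        ReplenishmentPlanner planner = new ReplenishmentPlanner(/* reorderPoint */ 50);

        boolean scheduled = planner.needsDelivery(/* currentStock */ 20);

        assertTrue(scheduled, "A delivery should be scheduled when stock falls below the reorder point");
    }
}

// A deliberately tiny stand-in so the sketch is self-contained; a real acceptance
// test would exercise the actual replenishment process end to end.
class ReplenishmentPlanner {
    private final int reorderPoint;

    ReplenishmentPlanner(int reorderPoint) { this.reorderPoint = reorderPoint; }

    boolean needsDelivery(int currentStock) {
        return currentStock < reorderPoint;
    }
}
```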

– Rappler.com

 

Calen Martin Legaspi is the CEO and co-founder of Orange and Bronze Software Labs, a company that helps other companies improve their IT processes, and builds custom IT solutions. He is also a member of the Philippine Software Industry Association board.

 

($1 = P43.97)
