Help – my system is overloaded with data!

By Clinton Jones on March 17, 2011

In response to a recent email from a colleague on performance problems at a customer site, I thought I would share some thoughts with you on how to remediate or avoid such problems.

My edited version of the problem goes as follows:

This customer bought Transaction for just one pilot project in order to evaluate it and decide whether to roll it out elsewhere in the organization.

Their first project bulk-loads work orders into SAP, which then processes them via a third-party scheduling tool.

A few months ago they had a meltdown of this process in production, which cost them money through lost productivity, so they launched a full-scale investigation into what went wrong. The initial findings suggested that the number of sessions opened in SAP had spun out of control and crashed the system, which they attributed to mass loading with Transaction.

After several additional tests, it appeared that their solution couldn't cope with the throughput.

An analysis of an 800-record load showed a huge spike in sessions, CPU usage, and response times about two-thirds of the way through, which they believe is what happened during the meltdown.

The basic problem is that SAP systems can get overloaded and this happens principally for five reasons:

  • Heavy paging of in-memory tables in and out on the application server
  • Insufficient dialog processes
  • System lock contention
  • Database over-runs
  • Weak or slow downstream systems called via synchronous RFC or user exits

Most of these can be avoided by applying the relevant SAP notes and maintaining the application and database servers properly. That said, some systems will never be able to cope with the data volumes that are thrown at them.

There are several different ways to manage this when using Winshuttle products:

  1. Direct Winshuttle sessions to a dedicated, batch-oriented application server or logon group that regular dialog users do not typically use
  2. Schedule mass creates and changes for off-peak hours or for times when the system is relatively quiet
  3. Avoid parallel execution of scripts that are I/O intensive, spawn many downstream sub-processes, or take a high number of system locks. In other words, if creating materials, execute scripts serially rather than in parallel; if you must run them in parallel, execute in smaller bursts of fewer records and try to avoid overlapping material numbers or document numbers
  4. Run performance tests against a production-like (pre-production) environment for base-lining and performance assessment prior to execution on production.
  5. Enable the wait function between records in the advanced run options (see the illustration).
  6. If chaining scripts, use a batch file rather than chaining in the application (Transaction supports command-line execution) and insert a sleep command between runs.
  7. Often it is secondary or downstream systems, called via RFC or a user exit in the flow logic of the DYNPRO, that introduce lag or latency and fail. If these are the principal weak spot, evaluate your architecture and determine whether you should implement higher redundancy and availability on that infrastructure
  8. Set up application monitoring on core business processes. Initially, at least, measure dialog response time, set SLA and MPT measures, and establish alerts around these.
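The "smaller bursts" advice in point 3 can be sketched in a few lines of shell. Everything here is illustrative: the record file, the burst size, and the `echo` standing in for the real Transaction run (the actual command-line syntax depends on your installation, so check your product documentation before adapting this):

```shell
#!/bin/sh
# Demo: break one large load into smaller bursts with a pause between
# them.  Counts and file names are illustrative only.
seq 1 10 > records.txt            # stand-in for a large load file
split -l 4 records.txt burst_     # bursts of 4 records each

for chunk in burst_*; do
    echo "loading $chunk"         # replace with the real Transaction run
    sleep 1                       # give the application server room to recover
done
```

The same pattern scales directly: an 800-record load split into bursts of 100 or 200 records, with a pause between bursts, keeps session and lock pressure on the SAP side far lower than one uninterrupted run.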
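Point 6, chaining runs from a batch file with a sleep between them, might look like the POSIX shell sketch below; on Windows a `.bat` file with a `timeout` between runs follows the same pattern. The `run_script` function and the script names are placeholders, not the real Transaction command-line invocation:

```shell
#!/bin/sh
# Sketch: chain script runs from a batch file instead of inside the
# application, pausing between runs.
# run_script is a placeholder -- substitute your actual Transaction
# command-line invocation here; the .txs script names are made up.
run_script() {
    echo "running $1"     # stand-in for the real CLI call
}

for s in load_orders.txs confirm_orders.txs; do
    run_script "$s"
    sleep 1               # let the SAP system settle between runs
done
```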


Questions or comments about this article?

Tweet @uploadsap to continue the conversation!