Channel: ABAP Testing and Troubleshooting

Code Inspector’s Performance Checks (IIII)


In the previous blogs of this series (see below) the objects that were investigated with the Code Inspector were 'program-like', that is reports, function groups, or ABAP OO classes.

Code Inspector performance checks series:

  1. Table attributes check  (discussed in this blog)

But the tool can do a lot more to improve the software quality than just analyzing ABAP source code. In principle, all types of repository objects can be checked with respect to consistency of their technical properties.
 
In this blog I want to present the 'Table attributes check' of the Code Inspector, which analyzes the definition of ABAP data dictionary tables and views.

The properties of a database table or a database view are maintained in the ABAP Dictionary (transaction SE11), for example on the tab strip 'Delivery and Maintenance' or in the 'Technical Settings'. Unfortunately, the ABAP Dictionary does not assist you in choosing table properties that are consistent and reasonable from a performance point of view.
The Code Inspector 'Table attributes check' issues messages if technical properties, buffer settings, or index settings appear to be inconsistent. Please note that the Code Inspector uses a simple rule set to identify such real or merely apparent inconsistencies. Some of the simple rules of the 'Table attributes check' might need to be overruled by the wise developer in special cases.

Here is a short introduction into the most important technical parameters of a table or view in the data dictionary:

Delivery Class
This parameter determines how the data in the table behaves during system installation, upgrade, client copy, and in a transport between systems. The most important delivery classes are A for application data and C for configuration data. Further delivery classes are L, G, E, S, and W. You can get more information on the delivery class in the field's F1 help on the tab strip 'Delivery and Maintenance' in transaction SE11.

Data Class
For some database platforms, this parameter determines in which physical area of the database the table will be created. The Code Inspector uses the data class as a categorization of the table with respect to its data content. Since this influences some of the tool's check results, you should correctly maintain whether a table contains master data (data class APPL0), transactional data (APPL1), or configuration data (APPL2).

Size category
The size category is also maintained in the technical settings and is used to determine and reserve the initial size of a table in the database. The Code Inspector uses this parameter as an indicator for the real size of the table in production use.

Table class
This parameter determines the type of a table; the relevant ones for this check are TRANSP (transparent table), CLUSTER (cluster table), POOL (pooled table), and VIEW (general view structure).

Buffering and Buffering types
Tables that are frequently read during production use, but only rarely modified, such as those containing configuration data, should be buffered on the application server. Accessing data in a buffered table is 10 to 50 times faster than accessing data in a non-buffered table, even if the latter is read from the database cache. In the technical settings of a table in transaction SE11, you can turn on table buffering and select the buffering type (fully buffered, generically buffered, or single record buffering).
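To see the difference in practice, an ST05 trace of the two statements below would show a database access only for the second one. This is a minimal sketch: the customizing table ZCONFIG and its field PARAM are hypothetical, and a real buffered read also requires that the statement does not use one of the constructs that bypass the buffer (joins, aggregate functions, SELECT DISTINCT, ORDER BY on non-key fields, native SQL).

DATA ls_config TYPE zconfig.

* Served from the table buffer on the application server
* (provided the table is buffered and the buffer is already loaded)
SELECT SINGLE * FROM zconfig
       INTO ls_config
       WHERE param = 'MAX_ITEMS'.

* Forces a database access even though the table is buffered
SELECT SINGLE * FROM zconfig BYPASSING BUFFER
       INTO ls_config
       WHERE param = 'MAX_ITEMS'.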

In the following diagram I have depicted how delivery class, data class, and buffering should be combined for (most of) the tables in your system. This diagram also reflects some of the rules according to which the Code Inspector 'Table attributes check' evaluates dictionary tables. As mentioned, there might be a few tables that do not fit into this scheme.

Proposal for buffer settings consistent with data class and delivery class
Examples on how to read this diagram:

  • delivery class C (customizing table) should not be combined with data class APPL1 (transactional data)
  • a table that contains configuration data (APPL2), and which is assigned to delivery class S (system table), should be buffered
  • a master data table (APPL0) with delivery class A (application table) should not be buffered, except if it has only a few entries (size category 0, or single record buffered with size category 0, 1, or 2)

Indexes

Frequently executed database statements must be supported by a database index to allow a fast search of the required entries. There are some rough rules for database indexes: for example, there should not be too many indexes defined for one table. Also, two different indexes of one table should not have too many common fields.

Messages of the check

In the following I want to present the messages of the 'Table attributes check' and give you some hints on what can be done to improve the situation.

Messages 0001, 0002: No Table Class or Delivery Class Chosen; No Data Class or Size Category Chosen
Maintain these table categorizations. They are used by the Code Inspector to evaluate database accesses.

Messages 0003, 0004: "Buffering Permitted, but switched Off" Selected
From a performance point of view, this buffer setting should be avoided. You as a developer should know whether a table is suited for buffering or not. This setting only makes sense, and should only be selected, if a table might be small at customer A and very large at customer B.

Message 0010: Buffering Type is Initial but Delivery Class Is "C", "G", "E" or "S"
According to its delivery class, the reported table is a configuration table (C or G) or a system table (delivery class E or S). Such tables are normally small and are rarely changed. Therefore they should be buffered. If the reported table does not contain configuration data or system data, you may have assigned the wrong delivery class to the table.

Message 0011: Buffering is Activated but Delivery Class Is "A" or "L"
According to its delivery class, the reported table contains application data (A), that is master or transactional data, or it is a table with temporary data (delivery class L). In both cases it is expected that the table has many entries and is changed frequently. Normally, such a table should not be buffered. Exceptions are small master data tables, which might also be buffered. If the reported table does not contain the described type of data, the delivery class might be wrongly assigned.

Message 0012: Buffering is Activated but Size Category Is > 2
Very large tables should not be buffered, even if the other preconditions for buffering are fulfilled. One effect of buffering large tables is that numerous small buffered tables might be displaced from the SAP table buffer. If only a few entries of a large table are needed during program execution, it can be reasonable to choose single record buffering or a fine granular generic buffering for this table. If the reported table has only a few entries in production use, and buffering would be appropriate, you might have assigned a wrong size category to the table.

Messages 0013, 0014: Buffering is Initial but Data Class Is "APPL2"; Buffering is Activated but Data Class Is "APPL0"/"APPL1"
Data class and buffering must be consistent. Normally, tables with data class APPL2, such as configuration tables, should be buffered. Tables of data class APPL0 or APPL1, such as master data tables or transactional data tables, should not be buffered. As an exception from this rule, small master data tables may also be buffered.

Message 0015: Buffering is Activated but No Buffering Type Is Chosen
Choose an appropriate buffering type for the reported table, based on the planned accesses. Possible buffering types are fully buffered, generically buffered, and single record buffered.

Messages 0016, 0017: Buffering is permitted but table is contained in DB view db_view; Buffering is permitted but table can be changed using database view db_view
Until NetWeaver release 6.40, it was very critical if modification of a buffered table was allowed via a database view. The reason was that data changes made through the view did not invalidate the table buffer, which could lead to data inconsistencies when reading data from the buffered table. Since release 7.0, the buffer is also invalidated if there are changes through the database view.
But what about the first message: why is it bad if a database view is defined for a buffered table? The rationale is that if a developer accepted the additional load implied by activating the buffer option, most accesses to the table really should make use of the buffering. The existence of a database view for the table, however, indicates that other accesses, which do not use the buffer, are planned as well.
Two final comments on this check: the messages are always reported for the buffered table, not for the database view. One reason is that the 'owner' of the table should have control over how the data is accessed. She or he should be aware of alternative access strategies that might be implemented by other developers.
If the database view is also buffered, there is no message reported. But note that any invalidation of the underlying table will also invalidate the buffered view in the table buffer.

Messages 0020, 0021: Table has more than 100 (700) fields
For technical and design reasons, a database table should not have too many fields. For Business ByDesign the stricter limit of 100 fields is applied; otherwise up to 700 fields are allowed.

Messages 0022, 0023: Change Log Active Despite Data Class "APPL0" or "APPL1";
Change Log Active for Large Table
The change log for a table is activated in the technical settings by selecting the flag 'Log data changes'. If the system parameter rec/client is also set for the client, a log entry is created for every change of the table. It's clear that for master data tables and for tables with transactional data (data classes APPL0 and APPL1) no change log should be created. The same is true for large tables in general (size category > 2). Writing change logs for such tables would slow down the production system.

Message 0030: Table Has Unique Secondary Index
That's just an information message with no performance relevance.
Every developer who inserts data into such a table should be aware that the insertion of entries which are identical with respect to the secondary index will lead to the runtime error 'Insert Duplicate Keys'.

Messages 0031, 0034, 0131, 0134: Table Has More Than 4 (7) Secondary Indexes Though Data Class Is "APPL0"; Table Has More Than 2 (5) Secondary Indexes Though Data Class Is "APPL1"
A database table should not have too many secondary indexes. On the one hand, every additional index increases the cost of data insertion and modification (the latter only if fields of the secondary index are affected). On the other hand, the cost-based optimizer of the database, which calculates the data access strategies, might get confused by too many indexes.

Message 0032: Secondary Index sec_index: More Than 4 Fields
More than four fields can be okay for an index, for example if you are planning an index-only access. But as a rule of thumb, four or fewer fields should be sufficient. The client field is not counted by the check.

Message 0033: Table Has Secondary Index But Is Buffered
The existence of a secondary index for a table signals that, besides buffered accesses, secondary index accesses are also planned. It must be ensured that the additional costs of both buffering and the secondary index are justified by corresponding table accesses in production use. The check only reports non-unique secondary indexes, since sometimes unique secondary indexes are defined to preclude duplicates in a table with respect to certain key combinations.

Message 0035: At Least 2 Fields ("dbfld_1" and "dbfld_2") Are Included in 2 Indexes
Indexes should not have too many fields in common with other indexes. For example, it does not make sense to have a table with a (primary or secondary) index A on the fields dbfld_1 dbfld_2 and another index B on the combination dbfld_2 dbfld_1. A different story is an index on fields dbfld_1 dbfld_2 dbfld_3 - here it could be justified to have another index on fields dbfld_3 dbfld_2. This check requires the wise developer to find out what the appropriate solution is!

Message 0036: Index idx_1 contains index idx_2 (left-aligned)
If there is an index dbfld_1 dbfld_2 dbfld_3 it does not make sense to define another index with fields dbfld_1 dbfld_2.

Messages 0037, 0137: Table dbtab: Field fld_name in primary / secondary index idx_name is of the type FLOAT
When a field of type FLOAT is read from the database, rounding differences might occur during the value assignment to an ABAP data object. If such a field is part of the table's primary key, this means that a certain table entry may not be addressable from within ABAP. If a field of type FLOAT is part of a secondary index, reasonable accesses are only possible with range conditions on this field in the WHERE clause of the database statement.
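The following is a minimal sketch of such a range condition. The table ZTAB_MEASURE and its FLOAT field FLTVAL are hypothetical; the point is only that an equality condition on a FLOAT column is fragile, while a small interval around the value is robust against rounding differences.

DATA: lv_value TYPE f VALUE '0.1',
      lv_eps   TYPE f VALUE '1E-10',
      lv_low   TYPE f,
      lv_high  TYPE f,
      lt_rows  TYPE STANDARD TABLE OF ztab_measure.

* Fragile: may return nothing because of rounding differences
SELECT * FROM ztab_measure INTO TABLE lt_rows
  WHERE fltval = lv_value.

* Robust: a small interval around the requested value
lv_low  = lv_value - lv_eps.
lv_high = lv_value + lv_eps.
SELECT * FROM ztab_measure INTO TABLE lt_rows
  WHERE fltval BETWEEN lv_low AND lv_high.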

Messages 0038, 0039: Client-specific table dbtab: Secondary index idx without client field; Client-specific table dbtab: Secondary index idx does not have client as first field
Though the client field is not very selective (and indexes should contain highly selective fields), we vote for putting it at the first position of any secondary table index, since normally the client is known for all table accesses. Secondary indexes that consist of GUIDs (Globally Unique Identifier) may get along without the client field.

Message 0041: INDX-type table dbtab is buffered
So called 'INDX-type' tables have a structure similar to that of SAP system table INDX. Such tables are normally not accessed with a SELECT statement, but with the ABAP statement IMPORT FROM DATABASE. Since this statement does not make use of any buffering, it is not reasonable to buffer an INDX-type table.
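To illustrate why the table buffer is irrelevant here, the following is a minimal sketch of the typical access pattern for an INDX-type table. The table ZINDX (with the same structure as INDX), the area 'ZC', and the ID are assumptions for the example; the accesses go through EXPORT/IMPORT ... TO/FROM DATABASE, not through SELECT, so the table buffer is never used.

DATA lt_data TYPE STANDARD TABLE OF t005.

SELECT * FROM t005 INTO TABLE lt_data.

* Write the internal table as a data cluster into the INDX-type table
EXPORT countries = lt_data
  TO DATABASE zindx(zc) ID 'COUNTRY_SNAPSHOT'.

CLEAR lt_data.

* Read the data cluster back - this bypasses the table buffer entirely
IMPORT countries = lt_data
  FROM DATABASE zindx(zc) ID 'COUNTRY_SNAPSHOT'.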

Message 0042, 0043: View dbview: First field not client, although basis table dbtab is client-specific; View dbview: Client field of basis tables dbtabs not linked using join condition
Database views based on client-dependent tables should contain the client field in the join condition. Also, the first field of the database view should be the client. Otherwise it is possible to read data cross-client with the database view, which is a security-relevant issue. The performance aspect of this issue is that without the client field, indexes of the tables making up the view might not be usable to optimize the data access.

Messages 0050, 0051: Language-dependent table dbtab: Language is not first field (or second after the client); Language-specific table dbtab is not buffered generically with regard to language
If a table is language-dependent, most accesses to the table will use the language field, since a user only wants to see texts in the logon language. Therefore it is a good idea to define the language field as the first field (or as the second field after the client), to allow for index accesses with language field plus key. Since most language-dependent tables contain configuration data, they should be buffered. To save main memory within the buffers, such a table should not be fully buffered, but generically with respect to the language. The most important accesses, which are via the language, will then be supported by the buffer.

These are a lot of rules, but I hope they will help you arrive at consistent settings for your dictionary tables and views.


New ABAP Unit and Coverage Analyzer Features in SAP NetWeaver 7.0 EhP3


Motivation

 

ABAP Unit tests help ensure high software quality by specifying how the software should behave in a given scenario. ABAP Unit tests allow you to execute any modularization unit (class, method, executable program, function module) in order to verify the expected behavior or locate an error.

Especially in test-driven development ABAP Unit tests play an important role, since they are implemented before the production code and serve as a specification of the expected behavior of the software. Therefore there is a need to make it as easy as possible to run large sets of ABAP Unit tests and measure their coverage on a regular basis and every time you change your code (large sets of regular regression tests).
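For readers who have not written ABAP Unit tests yet, here is a minimal, self-contained sketch of what such a test looks like. The local class names are made up for the example; on this release level the assertions are available via CL_ABAP_UNIT_ASSERT, on older releases the equivalent CL_AUNIT_ASSERT can be used.

CLASS lcl_calculator DEFINITION.
  PUBLIC SECTION.
    METHODS add IMPORTING iv_a          TYPE i
                          iv_b          TYPE i
                RETURNING VALUE(rv_sum) TYPE i.
ENDCLASS.

CLASS lcl_calculator IMPLEMENTATION.
  METHOD add.
    rv_sum = iv_a + iv_b.
  ENDMETHOD.
ENDCLASS.

* Test classes (FOR TESTING) are not part of the production load
CLASS ltc_calculator DEFINITION FOR TESTING
  DURATION SHORT
  RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS add_returns_sum FOR TESTING.
ENDCLASS.

CLASS ltc_calculator IMPLEMENTATION.
  METHOD add_returns_sum.
    DATA: lo_cut    TYPE REF TO lcl_calculator,
          lv_result TYPE i.
    CREATE OBJECT lo_cut.
    lv_result = lo_cut->add( iv_a = 2 iv_b = 3 ).
    cl_abap_unit_assert=>assert_equals(
      act = lv_result
      exp = 5
      msg = 'add( ) should return the sum of its inputs' ).
  ENDMETHOD.
ENDCLASS.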

Before SAP NetWeaver 7.0 EhP3 you could start ABAP Unit tests from within the Repository Browser (SE80) (or from transactions SE24, SE37, SE38). But this procedure was neither quick nor efficient, since you had to run the tests for each of your modularization units (class, method, program, function module) separately:

 

 

The menu paths were also not uniform across tools (SE80, SE24, SE37, SE38). If you wanted to execute tests for a set of objects, you had to use the Code Inspector. And again, mass testing was neither quick nor easy, since you needed to set up and carry out your unit testing in the Code Inspector (SCI), with its object sets, check variants, and so on.

Therefore a quicker and more convenient way is needed to execute a large set of ABAP Unit tests on the fly and measure the coverage of the tested units at the same time. That is where the improvements of ABAP Unit and the Coverage Analyzer in SAP NetWeaver 7.0 EhP3 come into play.

 

Automate ABAP Unit Runs With and Without Coverage

 

Now you can start your ABAP Unit tests from the Repository Browser (SE80) not only for single modularization units but also for several objects or even for a package (including sub-packages). In this way you can comfortably execute all ABAP Unit tests for your development objects at one time. Just select the objects and choose the context menu “Execute->Unit Test”. The results are displayed as usual in the standard ABAP Unit Browser view.

 

 

In addition, you can now execute ABAP Unit tests for your modularization units and packages (including sub packages) with coverage measurement (context menu “Execute->Unit Tests with->Coverage”).

 

 

The results are displayed in the result display known from the ABAP Unit Browser (not via an ABAP Unit info message). The coverage result display is scoped to your selected packages or modularization units.

 

 

In the same way, you can execute ABAP Unit tests with coverage from the Function Builder (SE37), ABAP Editor (SE38) or Class Builder (SE24).

 

 

Automate Regression Tests with the ABAP Unit Test Runner in the Repository Browser (SE80).

 

If you want to run your ABAP Unit tests periodically (regression tests), you can schedule your ABAP Unit tests from the Repository Browser (SE80) for the objects you select (context menu “Execute->Unit Tests with->Scheduling Job”):

 

 

The ABAP Unit Test Runner starts and you can schedule the ABAP Unit tests to run as a background job. Your selection from the Repository Browser is taken over, but you can flexibly specify the set of programs for which tests are to run. You can also exclude objects from the tests (checkbox “Exclude Selected Objects…”). This feature is quite useful if you are working on a large set of objects, need to run ABAP Unit tests regularly, and may want to exclude some objects, which you are not currently editing, from the check.

 

 

With the ABAP Unit Test Runner you can, for example, enable a team to schedule a batch job which runs every night, executes all ABAP Unit tests of selected packages, and sends the errors to the team members by e-mail. The e-mail contains a convenient ABAP Unit test report including detailed ABAP Unit test logs and information about errors:

 

 

If you execute the ABAP Unit Test Runner in dialog (radio button “Show Results (only in Dialog)”), you can additionally measure code coverage (checkbox “With Coverage”). The measurement results are displayed as usual in the ABAP Unit result browser.

 

 

Additionally, you can save ABAP Unit Test Runner parameters as variants, so that you can re-use them in future runs. You can maintain variants via the “Goto->Variants” menu, where you can get, display, delete, and save them.

 

 

Another way to get quick access to the variants is the “Get Variant” button. You can choose a variant from the catalog and apply it to your ABAP Unit Test Runner run:

 

Checkpoint Group, the powerful friend of every ABAPer, but… beware!


With this blog I’d like to share my point of view on the definition and usage of checkpoint groups. I won’t focus on the configuration of checkpoint groups nor on their use with break-points, log-points, and assertions, because these topics are widely discussed on SCN.

This is a report of what we (my colleague Sergio Ferrari and myself) found in a productive system of a customer.

Once you have read the official documentation and surfed through the various blogs and wiki pages, you will realize the advantages of adopting this tool: improvements in the quality of ABAP code, correctness of programs, and so on.

This is true, however… with some gaps. Do you want to know why?

The issue…

I have to be honest; until I had a challenging problem with a standard program, I never used checkpoint groups, not even for self-written ABAP applications.

During my last project I had to fix a bad behavior in a very complex standard transaction, and this experience made me aware that everything would have been much easier if I had been able to activate a checkpoint group.

The problem I had was strange and had all the characteristics of a program bug, but instead of immediately requesting support via OSS, I first made sure that the issue was not caused by any customization, such as an enhancement implementation, an exit routine, or even a repair to the standard.

A voice in my mind started repeating: “Ok Andrea... no problem. Just start from the transaction, find the custom code, activate a breakpoint, and then check with the debugger whether the custom code is the cause of the problem…”

It wasn’t so easy!

We combined data coming from different trace tools to generate an accurate list of the custom routines called during the transaction processing; using this list we activated a lot of breakpoints (one break for each custom routine) in order to check that the custom code didn’t influence the normal flow of the standard transaction...

Well, if a checkpoint group had been set up and a BREAK-POINT ID statement added to all the custom routines, my analysis would have been very easy!
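As a minimal sketch, this is all it takes in the custom routines; the checkpoint group name ZCUSTOM_CHECKS is an assumption and has to be created in transaction SAAB first. The breakpoint is completely inactive until the group is activated there, so it can safely stay in productive code.

FORM check_custom_pricing.
* Stops in the debugger only while the checkpoint group is activated in SAAB
  BREAK-POINT ID zcustom_checks.
* ... custom logic under suspicion ...
ENDFORM.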

Lesson Learned

I have spent a lot of time reading ABAP code and debugging applications, and I can assure you that SAP standard code makes wide use of checkpoint groups.

In my opinion this is one key to good, robust and efficient programming; very often we are tempted to quickly code a new functionality without looking forward…

Even if we wrote one of the most beautiful ABAP programs in the world, what could we use to analyze the program in case of problems?

 

Checkpoint groups as a standard for custom developments ;-)

Try to look forward and imagine that maybe someone else will have to solve an issue in an application written by us…

Why don’t we start using the checkpoint groups as a standard?

 

Beware of the LOG-POINTS

As we already know, LOG-POINTs can be used to collect the values of some variables in order to allow the developers to analyze their content and solve problems, but… once activated, do not forget to turn them off!!!

It can happen that, for whatever reason, the value of the profile parameter abap/aab_log_field_size_limit has been set to 0 (no restriction) by the system administrators (see the documentation), checkpoints have been activated, and a huge amount of collected log records is never cleaned up.

Once LOG-POINTs are activated, log data is collected in tables SRTM_SUB and SRTM_DATAX; over time, these two tables can contain so many records that transactions SAAB and SRTM become practically useless; any query on the checkpoint group logs takes a very long time or, even worse, never ends, so that it is not possible to delete the logs.

Generally a checkpoint wouldn’t be active for such a long time and create so many entries, but it can happen; at one customer we discovered more than 40 million records in table SRTM_SUB.

Unfortunately SAP doesn’t provide a report which would delete such large numbers of entries in batch and the easiest solution would be that the DBA team truncates the two mentioned database tables (SRTM_DATAX and SRTM_SUB).

These tables are used only for checkpoints (transaction SAAB), so no productive data is lost when truncating them.

Is everything under control?

One thing that I don’t like in this tool is that once you activate a checkpoint group you must also remember to deactivate it.

Yes, it’s true; SAP doesn’t provide a mechanism to automatically deactivate a checkpoint group, nor a concept of time validity, especially for the LOG-POINTs.

 

1. How can I check how many checkpoint groups are active in my system?

The first method is absolutely standard; you can get a list of all the active checkpoint groups directly from transaction SAAB by selecting the menu Activation > Display > All.


However, if you need to perform this simple but useful check as a scheduled background job, I implemented a simple report; the code snippet can be found here.

 

2. How can I delete the active checkpoint group activations?

All checkpoint activations can be deleted from transaction SAAB by selecting the menu Activation > Delete > All.


 

3. How can I delete the records in the log tables?

The easiest solution would be that the DBA team truncates the SRTM_DATAX and SRTM_SUB tables, but we love ABAP programming, so a snippet like the following could be useful.
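The original snippet is not included in this export, so the following is only a minimal sketch of what such a cleanup report might look like. It deletes the two log tables named above in packages with intermediate commits, to avoid one huge database transaction; run it only if you are sure the checkpoint logs are no longer needed.

REPORT zsrtm_log_cleanup.

CONSTANTS c_package TYPE i VALUE 100000.

DATA: lt_sub   TYPE STANDARD TABLE OF srtm_sub,
      lt_datax TYPE STANDARD TABLE OF srtm_datax.

* Delete SRTM_SUB package by package
DO.
  SELECT * FROM srtm_sub INTO TABLE lt_sub UP TO c_package ROWS.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  DELETE srtm_sub FROM TABLE lt_sub.
  COMMIT WORK.
ENDDO.

* Delete SRTM_DATAX package by package
DO.
  SELECT * FROM srtm_datax INTO TABLE lt_datax UP TO c_package ROWS.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  DELETE srtm_datax FROM TABLE lt_datax.
  COMMIT WORK.
ENDDO.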

 

Hope this helps ;-)

ST12 - 01N_* SP1 new features


The Single Transaction Analysis (ST12) was extended recently and has some great new features. This blog gives you an overview of what you can expect from the latest patch.

Before you start:

ST12 is delivered as part of the Service Tools for Applications (ST-A/PI). SAP note 69455 describes how to get the latest version of ST-A/PI. Watch out for the support package for the 01N_* version, which has been RTC (released to customer) since 21 November.

Please note that the ST12 transaction is not officially documented or supported and is only available in English. Although it is mainly intended for use by SAP or certified service consultants during SAP service sessions, you can use it for your own performance analysis. A brief description of ST12 is given in SAP note 755977 (ST12 "ABAP Trace for SAP EarlyWatch/GoingLive"). A few blogs on ST12 can be found in the ST12 wiki in SDN:

 http://wiki.sdn.sap.com/wiki/display/ABAP/Single+Transaction+Analysis

New Features:

Let’s have a look at the new elements on the main screen of ST12 in figure 1.

 Main Screen

Figure 1: Main screen

The “Size&Duration” setting is now a toggle button (see bullet 1 in figure 1) which offers predefined settings. You have the choice of 1) Default (2 MB trace file size and 1800 seconds trace duration), 2) Large (20 MB trace file size and 4200 seconds trace duration), and 3) Max (99 MB trace file size and 4200 seconds trace duration). The filters for program and modularization units have moved to the further options because they are rarely used. In the further options there is a new setting “Delete created ABAP trace files after collection”, which is set to true by default. ST12 now behaves like SAT and deletes the native trace files once they are imported into the database. And by the way, the “with internal tables” flag on the main screen is now set to true by default.

The clock type (see bullet 2 in figure 1) is new on the main screen. By default it is set to “Auto”, which means ST12 performs an asynchronous self-test in order to figure out whether the high resolution clock delivers reliable figures. Since this is not the case on all platforms, the setting is changed to the low resolution clock automatically if the self-test found unreliable times. The icon shows the reliability of the high resolution clock. If you move the mouse over the icon you will get additional information. Right after the icon there is hidden mouse-over information regarding the overhead of the different clocks.

Statistical records are automatically selected and stored with the trace now. By default the top 20 (with respect to response time) statistical records matching the user and time frame of the trace are selected and stored. For the work process scenario the statistical records are not collected because often the job will still be running.

If the User/Tasks scenario (see below) is used for tracing, the traces are grouped into one trace analysis (see bullet 4 in figure 1). This is explained in more detail below.

The former scenarios “User” and “Task/HTTP” have now been merged into the “User/Tasks” scenario (see bullet 5 in figure 1). The “Schedule Trace” scenario is now accessible directly from the main screen (see bullet 6 in figure 1).

Now let’s have a look at the new options for trace recording and trace management.

In the “User/Tasks” scenario it is much easier to deal with a larger number of trace files. After recording is stopped, filters can be applied for trace collection. You may specify filters for specific task types, trace duration, program or function module names, and so on (see bullet 1 in figure 2). This allows you to collect only those traces you are interested in. You can also specify here whether the trace files should be deleted after collection. This setting overrides the main setting in ST12.

 New Options

Figure 2: New options for “User/Tasks”

The same filters are available when you start “collect external traces”. In this screen you can “combine the different files in one analysis” (see below). If no trace matches the filters, nothing will be collected.

Grouping traces into one trace analysis. This feature helps to manage and organize multiple traces that were recorded with the “User/Tasks” scenario or collected with “collect external traces” and the “combine the different files in one analysis” setting.

All traces that belong together are now grouped into one line in the main screen table control or in the full screen ALV list. The ABAP Trace and Performance Trace can be chosen when a single trace is selected in the list. The group line has to be expanded in order to choose a single trace (see bullet 2 in figure 3). The statistical records and SQL trace summary can be chosen if a single trace is selected or for the header entry (see bullet 1 in figure 3). The statistical records will always show the top 20 statistical records that have been selected. The SQL trace summary shows either the SQL trace for the server where the selected single trace was recorded or a system wide summary across all servers if the header entry was selected.

 User Tasks

Figure 3: New options for “User/Tasks”

In the full screen trace list it is now possible to change the comment of selected traces after recording (bullet 3 in figure 3). And just to remind you: comparing or merging traces is possible here as well (bullet 4 and 5 in figure 3) since the previous versions already. These options will be explained in more detailed blogs later.

Another nice feature is the ABAP trace summary by application component (button to the right of bullet 5 in figure 3), which was newly developed. Here you will see a breakdown of the trace by application components. The details of this feature will be explained in a separate blog.

Last but not least the “Schedule Trace” Scenario:

The new option to schedule a trace for User / Tasks scenarios was added to the existing scheduled trace scenarios for background jobs and work processes (bullet 2 in figure 4). At the “From” point in time an ABAP Trace like in the “User/Tasks” scenario is started. At the “To” time the traces are finished and collected. Trace collection is also triggered when “Stop trace” is clicked.

Schedule a trace

Figure 4: Schedule a trace

In the batch job or work process scenarios, scheduled traces can now have so-called follow-up traces. You can specify how many of these follow-up traces you want to have. If a trace is scheduled and ends at the specified time limit, a new trace will be started for the same process until the specified number of follow-up traces is reached. With this setting you can trace long running processes conveniently and schedule multiple traces that will be started sequentially one after the other. A log was added to analyze the scheduled traces. These features will be detailed in a future blog.

 Summary:

SP01 adds many features that make trace recording and management easier.

More information on ST12 can be found here:

ST12 WIKI:

http://wiki.sdn.sap.com/wiki/display/ABAP/Single+Transaction+Analysis

Books:

http://www.sap-press.com/product.cfm?account=&product=H3060

http://www.sap-press.de/1821

ST12 - Schedule Traces


Consultants or support consultants like me know this situation quite well: you are at a customer and you are asked to trace and analyze a background job which runs outside of office hours, e.g. from 04:00 am to 04:30 am. This blog will show you how to have a good and undisturbed sleep at night WHILE the job is traced automatically by ST12.

Before you start:

ST12 is delivered as part of the Service Tools for Applications (ST-A/PI). SAP note 69455 describes how to get the latest version of ST-A/PI. Watch out for the support package for the 01N_* version, which has been RTC (released to customer) since 21 November.

Please note that the ST12 transaction is not officially documented or supported and is only available in English. Although it is mainly intended for use by SAP or certified service consultants during SAP service sessions, you can use it for your own performance analysis. A brief description of ST12 is given in SAP note 755977 (ST12 "ABAP Trace for SAP EarlyWatch/GoingLive"). A few blogs on ST12 can be found in the ST12 wiki in SDN:

http://wiki.sdn.sap.com/wiki/display/ABAP/Single+Transaction+Analysis

Schedule a trace:

Since the program we want to analyze runs outside of office hours we would like to schedule a trace that starts automatically when the program starts without manual interaction from us. Let’s see how this could be done in ST12.

Like always in ST12, we first specify the ABAP and performance trace options (see bullet 1 in figure 1) as explained in Single Transaction Analysis (ST12) – getting started and ST12 - 01N_* SP1 new features. This has to be done on the main entry screen. Then we can click on “Schedule >” (see bullet 2 in figure 1; before, this was reached via the menu Utilities->Schedule trace->for batch job or work process).

 Schedule a trace 1

Figure 1: Define Measurement Settings

In the next screen click on “for Background job” (see bullet 1 in figure 2). In the upper half of the screen (see bullet 2 in figure 2) you can specify selection criteria for the job like you would do in transaction SM37. You can filter for Job name, User name (The name of the SAP user who scheduled a job or job-step. This is not necessarily the user under whose authorizations the job runs), ABAP program name and Step variant name. If more than 1 job is found, a work process trace will be activated for the first matching job returned from the database.

 Schedule a trace 2

Figure 2: Schedule Trace for a background job

In the lower part of the screen we can define a time frame (see bullet 3 in figure 2) in which a matching job is searched (the default setting is the next 24 hours).

Additional settings can be applied (see bullet 4 in figure 2).

The trace duration specifies how long the trace should remain active after trace activation. This setting overwrites the setting from the ST12 main screen.

The Trace start delay specifies how long the trace activation for a running job matching the filter criteria from the upper half of the screen should be delayed.

The check interval specifies in which intervals the demon wakes up in order to select matching jobs. You can choose between 60 seconds or 10 seconds.

The #Follow-up traces specifies how many traces should start for the same job after the first trace has finished due to trace duration or size limit.

In the comment field you can add a comment in order to identify your trace later.

Technical Background:

What happens technically in the background is described here. The demon is an asynchronous RFC that runs in the time frame specified (see bullet 3 in figure 2). This RFC sleeps 10 or 60 seconds (depending on the setting made in the check interval, see bullet 4 in figure 2) and selects from the view V_OP with the given filters (see bullet 2 in figure 2). The trace demon stops when the end of the time frame is reached or when a job that fulfils the criteria is found in V_OP. The first job returned will be used in case there are multiple rows. Once an active job that fulfils the criteria is found, the demon waits another check interval (during which the selected job must still fulfil the criteria), and then the work process trace is started for the work process that executes this job.

Please note that all specifics of the work process trace described in ST12 - The workprocess trace apply to the scheduled trace as well. The most important point to understand here is that trace activation is only possible if the ABAP program is active in the ABAP VM, because only then can the ABAP and SQL trace start running. If the program is not active in the ABAP VM but outside of it when the trace activation is requested in the kernel, e.g. during long running SQL statements or RFC calls, the trace will only get to status green when control is back in the ABAP VM. This means it may happen that you miss a certain part of the trace if trace activation was not possible for a longer time because the program was, for example, busy on the database. For this scenario I’ll write another blog soon. You can find some tips for this situation in my book (mentioned below) as well.

Monitoring Scheduled Traces:

This section describes how to monitor scheduled traces. Once a trace is scheduled the green light shows that the master trace demon is active. Additional sub demons are started per instance. For each scheduled trace the status can be monitored in the lines in the table (see bullet 1 in figure 3). The possible states are

  • Glasses - “Checking for active suitable batch job” (bullet 1 in figure 3)

  • Yellow - “ABAP trace switched on for work process” (bullet 2 in figure 3) – This is the state when the trace is active for ABAP and SQL trace but no trace records are written yet because control is outside of the ABAP VM.

  • Green - “ABAP trace switched on for work process” (bullet 3 in figure 3) – This is the state when ABAP and SQL trace have started recording successfully.

  • Log - “Archived / Trace collected” (bullet 4 in figure 3) – This is the state when the trace is finished and collected. The trace request is archived.

Schedule a trace 3 

Figure 3: Monitoring scheduled traces

For all states the log of the demon can be analysed by double clicking on the icon shown on bullet 5 in figure 3.

The log (see bullet 6 in figure 3) shows the detailed steps executed from the demon that match the states described above.

Summary:

This blog described how you CAN have your cake and eat it too. That is having a good sleep at night while ST12 does the tracing for you. You can start your trace analysis conveniently after having a coffee in the office the next morning.

As always, ST12 works system-wide, that is, you don’t have to care about application servers as in SE30/SAT/ST05. A scheduled trace on application server A will find and trace a matching job on application server B.

More information on ST12 can be found here:

ST12 WIKI:

http://wiki.sdn.sap.com/wiki/display/ABAP/Single+Transaction+Analysis

Books:

http://www.sap-press.com/product.cfm?account=&product=H3060

http://www.sap-press.de/1821

Sample Code for Eliminating a High Percentage of Identical Selects


Here is the main concept of eliminating a high percentage of identical selects by means of a local buffer.

  1. Two internal tables (A and B) are defined to buffer the results of the SELECT statement on table BSEG. Table A holds the search criteria that exist in the database and table B holds the search criteria that do not exist in the database.

  2. Read internal table A; if the entry exists, return the value. If not, go to step 3.

  3. Read internal table B for the keys that are known not to exist in the database; if the key is not in this buffer either, go to step 4.

  4. Access the database, fetch the record, and log the result in one of the two internal tables.

 

types: begin of s_bseg,
         bukrs like bseg-bukrs,
         gjahr like bseg-gjahr,
         belnr like bseg-belnr,
         buzei like bseg-buzei,
         wskto like bseg-wskto,
         shkzg like bseg-shkzg,
       end of s_bseg.

* Local buffer tables: one for keys found in the database, one for keys
* that are known not to exist there
data: itb_bseg_found    type sorted table of s_bseg
                        with non-unique key bukrs gjahr belnr buzei,
      itb_bseg_notfound type sorted table of s_bseg
                        with non-unique key bukrs gjahr belnr buzei,
      wa_bseg           type s_bseg.

* bsas and bseg are assumed to be declared and filled elsewhere
* (e.g. via TABLES: bsas, bseg. and a surrounding loop over BSAS)

* Step 2: check the buffer of entries that exist in the database
read table itb_bseg_found into wa_bseg
     with key bukrs = bsas-bukrs
              gjahr = bsas-gjahr
              belnr = bsas-belnr
              buzei = bsas-buzei.
if sy-subrc = 0.
  bseg-wskto = wa_bseg-wskto.
  bseg-shkzg = wa_bseg-shkzg.
else.
* Step 3: check the buffer of keys that do not exist in the database
  read table itb_bseg_notfound transporting no fields
       with key bukrs = bsas-bukrs
                gjahr = bsas-gjahr
                belnr = bsas-belnr
                buzei = bsas-buzei.
  if sy-subrc <> 0.
*   Step 4: key not buffered yet - access the database once ...
    select single wskto shkzg
      from bseg
      into (bseg-wskto, bseg-shkzg)
      where bukrs = bsas-bukrs and
            gjahr = bsas-gjahr and
            belnr = bsas-belnr and
            buzei = bsas-buzei.

    wa_bseg-bukrs = bsas-bukrs.
    wa_bseg-gjahr = bsas-gjahr.
    wa_bseg-belnr = bsas-belnr.
    wa_bseg-buzei = bsas-buzei.

*   ... and remember the result in the appropriate buffer table
    if sy-subrc = 0.
      wa_bseg-wskto = bseg-wskto.
      wa_bseg-shkzg = bseg-shkzg.
      insert wa_bseg into table itb_bseg_found.
    else.
*     nothing found: make sure no stale values are used or buffered
      clear: bseg-wskto, bseg-shkzg, wa_bseg-wskto, wa_bseg-shkzg.
      insert wa_bseg into table itb_bseg_notfound.
    endif.
  endif.
endif.

 

Note: Change the declaration of the internal tables itb_bseg_found and itb_bseg_notfound from DATA to STATICS if this code is called inside a function module or a FORM routine, so that the buffer is kept between calls.

You are welcome to comment on this topic.

ABAP Performance on a Cluster Table


Recently I came across a case of an ABAP performance issue on a cluster table. I think a blog is a good place to share this topic. Your comments are really appreciated.

 

Issue description and analysis

 

Database perspective: the SQL trace showed an expensive statement on the table cluster KOCLU:

SQL_Trace.gif

ExecutionPlan.gif

A full table scan is performed on the table cluster KOCLU, which is the reason for this expensive SQL statement.

But why does the DBI ignore the values of the primary key fields, even though these values are already provided in the ABAP source code?

ABAP perspective

source Code.gif

Cluster table KONV exists only in the ABAP Dictionary and is stored in the table cluster KOCLU at the database level.

KONV Primary key fields:         

MANDT
KNUMV
KPOSN
STUNR
ZAEHK

From the field list above, the key field KNUMV was specified in the ABAP code but was not transferred to the database side.

The reason behind this is the OR operation in the WHERE clause. After the OR was removed, the key fields were transferred to the database side and an index range scan was chosen instead of the full table scan.
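Since the original statement is only visible in the screenshot, the following is a simplified, hypothetical sketch of the pattern. For pooled and cluster tables the DBI can pass only simple key conditions to the database; an OR that involves the key field prevents this, so KNUMV is evaluated in the application server and KOCLU is scanned completely. Splitting the statement (or using a ranges table or FOR ALL ENTRIES) is one possible way to remove the OR.

data: lt_konv    type standard table of konv,
      lv_knumv_1 type konv-knumv,
      lv_knumv_2 type konv-knumv.

* Problematic pattern: because of the OR, KNUMV is not passed to the database
select * from konv into table lt_konv
  where knumv = lv_knumv_1
     or knumv = lv_knumv_2.

* One possible rewrite without OR: the key field is passed each time and an
* index range scan on KOCLU can be used
select * from konv into table lt_konv
  where knumv = lv_knumv_1.
select * from konv appending table lt_konv
  where knumv = lv_knumv_2.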

The quality of an answer depends significantly on the quality of the question (or: how to ask good questions)


Try to remember, when was the last time you got a question and
you invested some time (and enjoyed it) to provide an answer? What were the
characteristics of the question? Was it a specific or rather unspecific question?
Was the question well prepared and provided all (most of) the details that you
needed to find the answer or did it take you several round trips to collect all
the details that you needed to provide the answer? Did you have the feeling the
person asking the question invested some time in asking the question or not?

At least for me the following is true: I invest time in finding an answer when I have the feeling the person asking the question invested time as well. If a question is well prepared and provides all the details, I also find it motivating to invest time in finding an answer.

How to ask good questions:

If you have a question on ABAP coding, provide the relevant (not more, not less) ABAP code and the SE30/ST12 traces you took. Also give some background information on the context, e.g. what you are trying to achieve, how often the program will run, and all kinds of relevant figures you have (e.g. size of an internal table, number of hits of a LOOP WHERE …).

If you have a question on ABAP OPEN SQL, provide the OPEN
SQL statement (ABAP) and the native SQL statement as it was sent to the DB from
ST05/ST12. Furthermore provide the KPIs from ST05 (nr. of exec, nr. of records
per exec, time per exec, avg. time per rec). The DB platform, the execution
plan and the statistics for all involved tables and all indexes for these
tables are very helpful as well.

In general: the more relevant details you provide in the
questions the better the quality of the answer can be. If you get answers that
are not very meaningful / helpful rethink your question. If you get the answer 42
you have asked the Ultimate Question of Life, The Universe, and Everything.


What’s the (practical) upper limit of indexes per table?


I recently delivered a service (ABAP OPEN SQL Optimization)
for a customer and the customer asked me: “What’s the upper limit of indexes
per table”. Three possible answers came immediately to my mind:

 

  1. 32767; since that was the technical limit on that DB.
  2. 42; since 42 is always a nice answer when asked for a figure.
  3. “IT DEPENDS”; since that answer is almost always correct.

 

I get this question on a regular basis, approximately once a month. I immediately discarded 1.) since that’s written in the documentation and an RTFM (Read The Fine Manual) would have been enough, but I thought the customer wanted to know the practical upper limit, not the technical upper limit. So I was left with 2.) and 3.) and I decided for 2.). There were many good old (grey haired) ABAP developers in the group, so I thought it was safe to give this answer and then come to 3.).

 

This is what happened:

 

Customer: “What’s the upper limit of indexes per table”.

 

Me: “There is only one definite answer to this question: The upper limit of

indexes per table is 42!”

 

Nobody was laughing. Silence. Everybody was staring at me.
Obviously nobody knew The Hitchhiker's Guide to the Galaxy.

LEARNING: Never use 42 as an answer when people don’t know it.

Now my trouble was two-fold: firstly I had to explain why I said 42, and next I had to explain that 42 is not the correct answer but “IT DEPENDS”.

 

The nice thing about answer 3.) “IT DEPENDS” is that it is almost always a correct answer. Always? No… of course it depends on the question whether “IT DEPENDS” is the correct answer… ;-) . The bad thing about this answer is that you are supposed to explain on what it depends… .

 

So: The upper limit of indexes per table depends on

 

  • the type of the SQL and DML workload on the table
  • whether reading (SQL) or writing (DML) performance
    matters in your business processes
  • the columns in the indexes and the type of changes on
    these columns
  • your hardware, your cache size, your IO subsystem
  • and even more things

         

If you are really interested in indexes in detail I recommend reading
this book. In my book you will find a section on this topic as well.

 

You might be interested in how many indexes per table you have in your systems. On ORACLE you can find the top scorers with this SQL:

    

select table_name, count(*) as cnt
  from dba_indexes
 group by table_name
having count(*) > 3
 order by cnt desc

 

On DB6 you can find the top scorers with this SQL:

 

select tbname, count(*) as cnt
  from sysibm.sysindexes
 group by tbname
having count(*) > 3
 order by cnt desc

Crossing Checkpoint Charlie in a SAAB


Confusing title? I am referring to the checkpoint group transaction SAAB and the third (Charlie=C=Third letter in the Latin alphabet) option of checkpoint groups, namely logpoints. ABAP offers logpoints as of NW 2004s.

 

Since the beginning of checkpoint groups in 2005 the topic has been covered in several blogs on SCN, for example

 

 

Despite the obvious advantages of checkpoint groups, I get the impression that they are still not widely used in custom ABAP code. Maybe it is because developers cannot find good arguments that justify the implementation effort. Therefore this blog focuses on describing one scenario where logpoints are beneficial. I will also provide a step-by-step example of how to use logpoints. Let me emphasize that I am not covering all the functionality of checkpoint groups; please read about checkpoint groups on help.sap.com if you need this information. Also keep in mind that the ABAP coding is written with the aim of making the scenario understandable and is not an example of how to program spotless ABAP!

 

1. Scenario

Time and again I face a recurring problem. There is a critical issue in the production environment which cannot be reproduced in the development or test systems. The functional experts are blaming the developers and the developers are sure that the problem is configuration related. After all, the developer has compared the code of the development, test, and production systems and has not found any deviations. Since it is working fine in the development and test systems, the developer concludes that the problem must be configuration related. The functional expert, however, has compared the configuration and is sure that it is just another ABAP bug that has found its way into the production system. In the meantime the business is pushing to get the problem fixed. The helpful developer therefore decides to have a further look at the problem. By reading the application log she can find out approximately where the issue is, but does not understand why it occurs. It would be helpful to see the values in the custom implementations and compare them with the successful values in the development system. Debugging the production system is not possible for numerous reasons.

 

2. Example

The next example is very simple and helps to show how easy it is to develop logpoints. It describes how to log input and output parameters of a method. My recommendation is that logging input and output parameters of important methods is a minimum. Developers, however, are free to add as many and complicated logs as they may wish on top of that.

 

2.1 Code

Following the theme of this blog, I have created a class with a method called BORDER_CONTROL. It has an importing table and an exporting table. The structure of the table consists of a name, description, nationality, date, and a flag indicating whether or not the person is allowed to cross the border at Checkpoint Charlie.

CCC1.jpg

 

CCC2.gif
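Since the class and method are shown only as screenshots above, here is a rough sketch of what such a method with logpoints could look like. The checkpoint group name ZCHECKPOINT_CHARLIE, the types, and the decision logic are assumptions for illustration; the relevant part is the two LOG-POINT ID statements that record the importing and exporting tables.

CLASS lcl_checkpoint_charlie DEFINITION.
  PUBLIC SECTION.
    TYPES: BEGIN OF ty_traveller,
             name        TYPE string,
             description TYPE string,
             nationality TYPE string,
             date        TYPE d,
             allowed     TYPE c LENGTH 1,
           END OF ty_traveller,
           ty_travellers TYPE STANDARD TABLE OF ty_traveller WITH DEFAULT KEY.
    METHODS border_control
      IMPORTING it_travellers TYPE ty_travellers
      EXPORTING et_travellers TYPE ty_travellers.
ENDCLASS.

CLASS lcl_checkpoint_charlie IMPLEMENTATION.
  METHOD border_control.
    DATA ls_traveller LIKE LINE OF it_travellers.

*   Log the input - written only while the checkpoint group is set to Log
    LOG-POINT ID zcheckpoint_charlie
      SUBKEY 'BORDER_CONTROL_IMPORT'
      FIELDS it_travellers.

    CLEAR et_travellers.
    LOOP AT it_travellers INTO ls_traveller.
*     Simplified decision logic, just for the sketch
      ls_traveller-allowed = 'X'.
      APPEND ls_traveller TO et_travellers.
    ENDLOOP.

*   Log the output as well
    LOG-POINT ID zcheckpoint_charlie
      SUBKEY 'BORDER_CONTROL_EXPORT'
      FIELDS et_travellers.
  ENDMETHOD.
ENDCLASS.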

 

For testing purposes I have created a little program calling the class method BORDER_CONTROL.

CCC3.gif

 

2.2 Activating the Checkpoint Group

In order to log the logpoints of method BORDER_CONTROL, it is necessary to activate the checkpoint group.

CCC4.jpg

 

Switch the radio button from Inactive to Log.

CCC5.jpg

 

It is possible to restrict who should trigger the log by selecting specific users. Press the User button.

CCC6.jpg

 

Press CCC7.jpg. Type in the user name.

CCC8.jpg

 

Save the checkpoint group. Upon saving, the following dialog appears. Here you can restrict how long the activation is valid. In a production environment, I recommend activating the checkpoint group just before running a process and deactivating it immediately thereafter.

CCC9.jpg

 

Now you can run the above mentioned test program Z_CHECKPOINTS by hitting F8. A log entry has been created. It can be viewed by pressing the Log tab of the checkpoint group.

CCC10.jpg

 

By expanding the log tree it is possible to get to the log fields. In this case the importing and exporting tables of method BORDER_CONTROL.

CCC11.jpg

 

These are the values of the importing table.

CCC12.gif

 

The exporting table contains an allowed flag for Lou Gram.

CCC13.gif

 

With this type of information the developer can get an overview of the data flowing in and out of custom methods, which should help identify potential issues and/or reproduce the problems in other systems.  The development effort is minimal and I hope that the use of checkpoint groups in customer ABAP code will increase.

 

In this blog I have described one reason why I use logpoints in my custom code. There are plenty of other reasons which are not covered here. How do you use checkpoint groups?

 

(For more information on Checkpoint Charlie and Peter Fechter, please visit http://en.wikipedia.org/wiki/Checkpoint_charlie)

SELECT..INTO TABLE faster than SELECT/ENDSELECT. The real reason.


Let me begin by asking a simple question. Say I have a database table - dbtab that contains 1000 rows of data. Say I run a SELECT/ENDSELECT query on dbtab. Now, does this statement hit the database a 1000 times?

Most beginners (and sometimes even ABAP “veterans”) would emphatically reply – YES. The purpose of this blog is to examine if this is really true.

 

Consider the following code snippet:

 

Case 1:

DATA: WA_T005 TYPE T005,
      IT_T005 TYPE STANDARD TABLE OF T005.

SELECT *
  FROM T005
  INTO WA_T005.
  APPEND WA_T005 TO IT_T005.
ENDSELECT.

 

Now, go to ST05 (Performance Trace) transaction and activate the trace. Execute the above code snippet and then view the SQL trace of ST05. The following is a screenshot of “Detailed Trace List” view of the SQL Trace.

 

Detailed Trace List.jpg 

Fig 1: Detailed Trace List.

 

Before going any further, let me explain briefly about ST05’s SQL trace. By means of the SQL trace, one can analyze all database statements that the DBI (Database Interface) sent to the database and the data that was returned. In other words, every time the system hits the database and data is sent back to the DBI (which is a component of the work process; the DBI is therefore a part of the application server), it is logged in the SQL trace of ST05. So the SQL trace is a crude way of knowing roughly: “How many times has the system gone from the DBI and hit the database?”.

 

So if the SELECT/ENDSELECT had hit the database 1000 times, we should have seen 1000 entries with the FETCH operation in the above trace. Instead we see only 1 FETCH in the trace, and all 236 records present in the T005 table are pulled in a single FETCH operation!!

 

What actually happens under the hood is – the DBI automatically implements an optimization. The data is NOT sent from the database to the DBI one row at a time. The data is sent from the database to the Application Server (specifically to the DBI) in “Packages”. The data in the packages is buffered in the DBI and then sent to the ABAP program. In case of a SELECT/ENDSELECT, the data is sent from the DBI to the ABAP program one row at a time. Why one row at a time? Because the ABAP program can store data only in wa_t005 (which is a work area; it can hold only a single line at a time). So it is important to realize that there are 2 steps involved – the transfer of data from the database to the DBI and the transfer of data from the DBI to the appropriate variable in the ABAP program. The transfer from the database to the DBI ALWAYS happens in packages (unless a SELECT..UP TO 1 ROWS or SELECT SINGLE is used).

 

Consider Case 2: If I had used the following code:

SELECT *
FROM T005
INTO TABLE IT_T005.

 

In this case also, the data is sent from the database in “packages” and buffered in the DBI. And the data is sent from the DBI to the ABAP program only after ALL the packages have been received. At this point, I have to back up for a moment and explain what exactly a package is.

 

A package is a “packet of data”, that is, a set of rows. The package size depends on the respective database platform and on whether it is a Unicode system or not. Usually, package sizes are between 8 KB and 128 KB and are a multiple of 8 KB. So suppose I have a database table with 100,000 records and its line width is 64 bytes, and my SELECT query is to fetch 2500 records from it. That means the data to be fetched will occupy a space of 160,000 bytes or roughly 160 KB (2500 multiplied by 64). Assume the package size on my database platform is 32 KB. So the entire data of 160 KB will be transferred in 5 packages (160/32) from the database to the application server. So if the total size of the records to be fetched is more than the size of a single package, the data is transferred in the form of multiple discrete packages. Now would be a good time to look back at Fig 1, where you would see columns named “Array” and “Recs”. The “Array” column represents the maximum number of records/rows that a single package may contain. “Recs” represents the number of records/rows transferred in that particular FETCH operation. So from Fig 1, it can be seen that each package can accommodate 32767 rows. The total number of records in T005 is 236 and, since my SELECT query has no WHERE condition, 236 records are to be fetched. The package can take in more than 236. Therefore all the data to be fetched in this query is fetched in a single package. So there is just 1 hit to the database.

 

Now, coming back to Case 2: the DBI waits until it has received all the data to be fetched for the query. Say there were 50,000 countries in T005 (let's hope the world doesn't become so fragmented!); the DBI would wait for two packages to arrive from the database. Only then would it send all this data to the ABAP program, that is, to the internal table IT_T005. In this case, the data is NOT passed one row at a time from the DBI to the ABAP program, because the query now has a variable that can hold multiple rows (the internal table IT_T005). So in Case 2, the optimization works on two levels: at the database level, where data is transferred not in individual rows but in packages, and at the ABAP program level, where data is transferred from the DBI to the program not in single rows but in bulk.

 

In fact, this is the actual reason why SELECT ... INTO TABLE is faster than SELECT/ENDSELECT. Very often I hear the answer: "SELECT/ENDSELECT is slower because the system hits the database many times, whereas with SELECT ... INTO TABLE the system hits the database only once." That is not correct. In both statements the number of database hits is the same (for Case 2, check the SQL trace: the number of DB hits is 1). The correct answer is that with SELECT ... INTO TABLE there is an optimization on two levels, while with SELECT/ENDSELECT the optimization happens on only one level.

 

 

Case 3:
SELECT *
  FROM T005
  INTO TABLE IT_T005_TEMP
  PACKAGE SIZE 10.

ENDSELECT.

 

In this case the difference is that the DBI does not wait until it has received all packages from the database. As soon as it gets a package from the DB, it buffers it and immediately passes it to the internal table IT_T005_TEMP in the ABAP program. Once it gets the next package from the database, it sends the next set of data to IT_T005_TEMP, which overwrites the data previously present in IT_T005_TEMP.
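To make this concrete, here is a hedged sketch of how such a loop is typically written: the rows of the current package have to be processed (or collected) inside the loop before the next package overwrites IT_T005_TEMP. The collecting table IT_T005_ALL is only illustrative and not part of the original example.

DATA: it_t005_temp TYPE STANDARD TABLE OF t005,
      it_t005_all  TYPE STANDARD TABLE OF t005.

SELECT *
  FROM t005
  INTO TABLE it_t005_temp
  PACKAGE SIZE 10.

  " work on the current package of up to 10 rows here,
  " because the next package overwrites it_t005_temp
  APPEND LINES OF it_t005_temp TO it_t005_all.

ENDSELECT.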

 

Summary:

  • An analogy may be drawn to Max Planck's quantum theory here: data is fetched from the database to the application server in discrete packets called "packages", unless SELECT ... UP TO 1 ROWS or SELECT SINGLE is used.
  • When a SELECT/ENDSELECT query is run on a table with 1000 records, it does NOT mean that the statement hits the database 1000 times. The number of database hits depends on the package size, the line width of the table, and the number of rows to be fetched.
  • SELECT ... INTO TABLE is faster than SELECT/ENDSELECT because it is optimized on two levels (database to DBI in packages, and DBI to ABAP program in bulk), whereas SELECT/ENDSELECT is optimized on only one level (the database-to-DBI transfer).

 

References:

[1] Gahm, Hermann. ABAP Performance Tuning. SAP PRESS, 1st edition, 2009.

 

NOTE: This blog may seem to make a rather theoretical point. Even without knowing any of this, most ABAPers know that SELECT ... INTO TABLE is faster than SELECT/ENDSELECT. It is just that the reasoning usually given for the better performance of SELECT ... INTO TABLE is not technically accurate, and it is worth knowing the technically correct reason; that is why I felt this post had to be written. In fact, I started this post with a question that was asked to me in an interview. No surprises: I gave the wrong answer. The interviewer then explained the correct answer, but I couldn't digest everything he said then and there. I had to ponder it for a while and then refer to the book by Hermann Gahm to clearly understand and organize my thoughts.






 






 

 

 


 



TechEd 2012: The Brand-New ABAP Test Cockpit – A New Level of ABAP Quality Assurance

$
0
0

Are you interested in the ABAP Test Cockpit but couldn't attend the corresponding SAP TechEd sessions?

 

Watch this recording of session CD101 by Ekaterina Zavozina and Boris Gebhardt to learn more about the ABAP Test Cockpit and how to use it efficiently:

 

The Brand-New ABAP Test Cockpit – A New Level of ABAP Quality Assurance

Bugs in your custom ABAP code can be quite expensive when they impact critical business processes, which is why ABAP quality assurance of custom code is receiving more and more attention in business. SAP develops a great deal of ABAP code and uses the ABAP Test Cockpit as its central check tool. Why should customers not use the same ABAP check tool SAP is using? Good question. That's why SAP started a pilot project with two big customers in order to find out whether the ABAP Test Cockpit is beneficial for custom code. The results of this pilot project were very promising, and now we can proudly announce that we plan to release the ABAP Test Cockpit to customers. In this session, we will show you in live demos why the ABAP Test Cockpit is a "must have" for your custom ABAP development.

 

ABAP Unit Tests without database dependency - DAO concept

$
0
0

Hi

 

In this blog I would like to present a technique for testing business logic without database dependency, implemented with an object-oriented design. Many developers complain that they cannot write many unit tests because their reports use the database and table content may easily change. Removing the database from testing is the key to successful unit tests.

 

Just as a reminder, good unit tests:

  • Always give the same result.
  • Can be run in any order – each test runs independently and must work on its own.

 

Let's imagine a test that uses the database:

  • Creates a new row.
  • Runs the business logic which reads that row.
  • Checks the result.
  • Deletes the row at the end.

 

Such a test can work fine, but it may not always give the same result. If someone else manually creates, changes, or deletes the row during the test run, we can get an error that interrupts the test, or invalid results.

 

That is why good unit tests:

  • Do not use the database.
  • Do not rely on network calls or files.

 

I think it is a really bad thing to have "randomized" test failures: the logic of the program and the test is correct, but the test fails accidentally because of the environment setup or other factors. We need to eliminate this, because a unit test failure must indicate a defect in the business logic, not in the testing environment.

 

The technique I present is called dependency injection. In general, we need to replace the database queries with something that pretends to be (mocks) the database. We inject a new object with new behavior into the code under test, so that in the end no database queries are executed – that is why it is called dependency injection.

There are many ways of doing this, for example using interfaces or inheritance.

 

I want to recommend one approach that uses inheritance, because:

  • It is simple.
  • It does not require creating a separate interface.
  • We only extend the test code to mock the database; the production code is not affected.

 

We need to make a distinction between:

  • Production code – business logic executed by the real program in the production system. It is usually a global class, report, or include.
  • Test code – used only for testing, never run in production. Test code should not be put into the production code even if it is unused, so a production global class should not have methods like set_customer_for_test_only( ).

 

Below is a design template that we can use for testing database-dependent logic with dependency injection. If you follow this approach, it is easy to extend the production code, the database queries, and the tests in the future.

 

 

1. Build a Data Access Object (DAO) which will be the single access point to the database.

 

  • Create a class method get_instance( ) which returns the singleton instance of the object.
  • Create a class method set_instance( ) that makes it possible to inject a mock instance when we need it (see the sketch below).
  • Each business domain should have its own DAO, like ZCL_CUSTOMER_DAO, ZCL_CONTRACT_DAO, ZCL_WORK_ORDER_DAO etc. Initially we can have one DAO per report, but if there are too many queries for different tables, complexity increases. It is better to split it logically into separate DAO units that everyone can use, so try to make DAOs domain-specific rather than report-oriented. Keep it simple.
  • Methods in the DAO should suit the program's needs, especially for performance reasons. If the program reads a table many times and requires only single column values, build a method that returns a table of that column's values only. However, if the program reads the table just a few times, you can build a method that returns full rows and extract the column in your program.
  • Methods in the DAO are mainly database queries, but also function/BAPI/object calls that use the database internally.
  • Database logic is extracted and separated from business logic.
  • There is only one access point to database queries because the singleton pattern is used.

 

DEFINITION:

    CLASS-DATA mo_dao_instance TYPE REF TO zcl_employee_dao.

    CLASS-METHODS get_instance
      RETURNING VALUE(ro_instance) TYPE REF TO zcl_employee_dao.

IMPLEMENTATION:

    METHOD get_instance.
      IF mo_dao_instance IS INITIAL.
        CREATE OBJECT mo_dao_instance.
      ENDIF.
      ro_instance = mo_dao_instance.
    ENDMETHOD.
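The set_instance( ) method mentioned in the bullet list above is not shown in the original snippet; a minimal sketch could look like this (the parameter name io_instance is only illustrative):

DEFINITION:

    CLASS-METHODS set_instance
      IMPORTING io_instance TYPE REF TO zcl_employee_dao.

IMPLEMENTATION:

    METHOD set_instance.
      " allows a test to inject a mock DAO instance
      mo_dao_instance = io_instance.
    ENDMETHOD.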

 

 

2. The global class (production code) keeps an attribute mo_employee_dao that references the DAO and is initialized in the constructor.


     METHOD constructor.
       me->mo_employee_dao = zcl_employee_dao=>get_instance( ).
     ENDMETHOD.

 

 

3. All database operations in the production code must be delegated to the DAO instance.

 

  • The global class never runs direct database queries inside its own methods.
  • Instead, all queries are delegated to the DAO instance, for example:

 

     ls_employee  = mo_employee_dao->get_employee_by_id( i_employee_id ).
     lt_employees = mo_employee_dao->get_employees_from_department( i_department ).

 

 

4. For testing scenarios, we create a new class that pretends to be the real DAO but has predefined results for each method.

 

  • It may be a local class if we need to mock results only for a local program, or a global class if we want to share it more widely.
  • The class extends ZCL_EMPLOYEE_DAO; inheritance is used here.
  • I use the _mock suffix in the name to identify that this is a mocking class (a convention from Java development).

 

     CLASS lcl_employee_dao_mock DEFINITION INHERITING FROM zcl_employee_dao.

 

  • We need to redefine only the methods that will be used in the test scenario.
  • However, if we do not redefine a method and it is used during a test, the real database access will be performed, so watch out for that.
  • Optionally, all methods can be implemented with empty content (so that empty results are returned by default), and then an implementation is written only for the methods needed in the test scenarios.

 

DEFINITION:

    METHODS get_employee_by_id FINAL REDEFINITION.
    METHODS get_employees_from_department FINAL REDEFINITION.

 

  • In the lcl_employee_dao_mock method implementations we create the fixed values that we assume would be returned from the database. We can program conditions to return different results for different input parameters.

 

METHOD get_employee_by_id.
    IF i_employee_id = '00001'.
      rs_employee-id         = '00001'.
      rs_employee-name       = 'Adam Krawczyk'.
      rs_employee-age        = 29.
      rs_employee-department = 'ERP_DEV'.
      rs_employee-salary     = 10000.
    ELSEIF i_employee_id = '00002'.
      rs_employee-id         = '00002'.
      rs_employee-name       = 'Julia Elbow'.
      rs_employee-age        = 35.
      rs_employee-department = 'ERP_DEV'.
      rs_employee-salary     = 15000.
    ENDIF.
  ENDMETHOD.                    "get_employee_by_id

 

  • Implementing the methods requires knowledge of the database content. During development I often take real database values found while debugging or running manual queries and use them to prepare the test case. In this way you show in code what can actually be expected from the database – not fake, but realistic values. That also helps others to understand the logic.
  • We must know the possible input values and expected results. If we do not know them, how can we be sure that our production code logic works at all? Not knowing the business domain or lacking test data cannot be an excuse for not having unit tests.

 

 

5. After everything above is set up, we can easily inject the mock DAO in unit tests.

 

  • In the class_setup method of the unit test class, which is run once before all tests are executed, inject the mock DAO into the real DAO:
DEFINITION:

    CLASS-METHODS class_setup.

IMPLEMENTATION:

    METHOD class_setup.
      DATA lo_employee_dao_mock TYPE REF TO lcl_employee_dao_mock.
      CREATE OBJECT lo_employee_dao_mock.
      ZCL_EMPLOYEE_DAO=>set_instance( lo_employee_dao_mock ).
    ENDMETHOD.

 

  • And that is it. The mock DAO will now be used, and during all tests the predefined result sets from our own implementation in LCL_EMPLOYEE_DAO_MOCK are returned.
  • Initially I also used to restore the original DAO instance in the class_teardown method, which is run after all tests have finished. However, this is not needed.
  • A singleton instance defined as in point 1 lives only within one session. This means the mock DAO instance is injected into ZCL_EMPLOYEE_DAO only for the duration of the unit test execution. Even if the unit tests run for a longer time and I start the production code in parallel from a new session (for example a new transaction or a program run with F8), the real DAO will be used there, because that is a separate session.

 

Below is a summary of all the described steps, showing an end-to-end example of production code and test code.

 

1. Types definition used in classes.

  • Let's define the types that will be used in the example below (a hedged sketch follows after the screenshot reference).
  • A structure represents the basic data of an employee.
  • A hashed table of employees with a unique ID key.

01_types_definition.PNG
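The screenshot is not reproduced here, so the following is only a hedged sketch of how these types might look; the field names are taken from the mock example above, while the concrete field types and the table type name zemployee_tab are assumptions:

TYPES: BEGIN OF zemployee_s,
         id         TYPE n LENGTH 6,
         name       TYPE string,
         age        TYPE i,
         department TYPE string,
         salary     TYPE p LENGTH 8 DECIMALS 2,
       END OF zemployee_s.

" hashed table of employees with a unique ID key
TYPES zemployee_tab TYPE HASHED TABLE OF zemployee_s WITH UNIQUE KEY id.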

2. DAO class for employee - definition.

  • get_instance( ) and set_instance( ) are created according to the template described above.
  • Three methods for database queries.

02_dao_definition.PNG

 

3. DAO class for employee - implementation.

  • get_average_age is a specialized method which moves the average calculation to the database.
  • get_employees_from_department returns a table of employees that will be used for other statistics calculations.
  • For test purposes, zemployee_db_table is used, and we assume that it contains the same columns as the structure (a hedged sketch follows below).

03_dao_implementation.PNG
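Since the implementation itself is only available as a screenshot, here is a hedged sketch of two of the methods; the table name zemployee_db_table comes from the bullet above, while the parameter names are assumptions:

  METHOD get_average_age.
    " the average calculation is pushed down to the database
    SELECT AVG( age )
      FROM zemployee_db_table
      INTO r_average_age.
  ENDMETHOD.

  METHOD get_employees_from_department.
    SELECT *
      FROM zemployee_db_table
      INTO TABLE rt_employees
      WHERE department = i_department.
  ENDMETHOD.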

4. Business class - employee statistics - definition.

  • Example production code which reads employee statistics: employee data, the average age of all employees, and the average salary in a specific department.

04_employee_statistics_definition.PNG

5. Business class - employee statistics - implementation.

  • mo_employee_dao is initialized in the constructor and is the access point to the database for the business logic.
  • No direct database access.
  • All database queries are done through the mo_employee_dao object.
  • This is a simple example for demo purposes. In real life the logic can be more complicated, but still only single queries go to the database and the program logic processes the results.
  • The average age is read directly from the database through the DAO.
  • The average salary in a department is calculated by the program: for demonstration purposes, the DAO returns the list of employees from the department and the program then calculates the average (a hedged sketch follows below). In reality it would be easier to implement this in the DAO as a single database query as well.

05_employee_statistics_implementation.PNG
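A possible shape of the salary calculation described above, shown only as a hedged sketch (the method name, parameter names, and the zemployee_tab type follow the earlier sketches and are assumptions):

  METHOD get_average_salary_in_department.
    DATA lt_employees TYPE zemployee_tab.
    FIELD-SYMBOLS <ls_employee> TYPE zemployee_s.

    " the DAO delivers the employees, the program calculates the average
    lt_employees = mo_employee_dao->get_employees_from_department( i_department ).

    IF lines( lt_employees ) > 0.
      LOOP AT lt_employees ASSIGNING <ls_employee>.
        r_average_salary = r_average_salary + <ls_employee>-salary.
      ENDLOOP.
      r_average_salary = r_average_salary / lines( lt_employees ).
    ENDIF.
  ENDMETHOD.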

6. Mock DAO - definition.

  • The mock DAO extends the real DAO, so it has the same methods available.
  • All methods of the real DAO are redefined in this case.
  • FINAL REDEFINITION indicates that we do not want anyone to further redefine the lcl_employee_dao_mock methods; we could just as well use the REDEFINITION keyword only.
  • In points 7 and 8 you will see different ways of implementing the test data, for demonstration purposes. In reality it is better to keep to one convention in the mock DAO class.

06_dao_mock_definition.PNG

7. Mock DAO - implementation part 1.

  • One way of preparing test data.
  • There is an internal table that corresponds to the real database table.
  • In the constructor of the mock DAO we initialize this table the way we would prepare the real database table before a test.
  • In the mock DAO methods we read from this internal table instead of the real database table (a hedged sketch follows below).

 

07_dao_mock_implementation_table.PNG
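A hedged sketch of this variant; the attribute name mt_employees (assumed to be declared as zemployee_tab in the definition part, together with the constructor), the sample values, and the returning parameter name are assumptions:

CLASS lcl_employee_dao_mock IMPLEMENTATION.

  METHOD constructor.
    DATA ls_employee TYPE zemployee_s.

    super->constructor( ).

    " fill the internal table as if it were the real database table
    ls_employee-id         = '00001'.
    ls_employee-name       = 'Adam Krawczyk'.
    ls_employee-age        = 29.
    ls_employee-department = 'ERP_DEV'.
    ls_employee-salary     = 10000.
    INSERT ls_employee INTO TABLE mt_employees.
  ENDMETHOD.

  METHOD get_employees_from_department.
    DATA ls_employee TYPE zemployee_s.

    " read from the internal table instead of the database
    LOOP AT mt_employees INTO ls_employee WHERE department = i_department.
      INSERT ls_employee INTO TABLE rt_employees.
    ENDLOOP.
  ENDMETHOD.

ENDCLASS.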

 

8. Mock DAO - implementation part 2

  • If we do not need to simulate the full table content, we can implement the test data directly in the methods.
  • Based on conditions over the input parameters, we define the returned results.
  • It is easy to extend the test data in the future: just build the data for a new input parameter designed for a new test scenario.
  • Sometimes we can also hardcode database values as the result of a method, as in the case of get_average_age.

08_dao_mock_implementation_direct.PNG

9. Unit Test class definition.

  • class_setup is needed; it is run once before all tests. We replace the real DAO with the mock DAO there.
  • The setup method is run before each test. A fresh instance of the object under test is created.
  • lo_employee_statistics is the business logic object that we want to test.
  • Three methods are tested, two of them with both found and not-found values (a hedged sketch of the definition follows below).

09_test_class_definition.PNG
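As the definition is only shown as a screenshot, here is a hedged sketch of what such a test class definition could look like; the class name zcl_employee_statistics and the individual test method names are assumptions:

CLASS ltcl_employee_statistics DEFINITION FOR TESTING
  DURATION SHORT
  RISK LEVEL HARMLESS.

  PRIVATE SECTION.
    DATA lo_employee_statistics TYPE REF TO zcl_employee_statistics.

    CLASS-METHODS class_setup.
    METHODS setup.

    METHODS get_employee_found      FOR TESTING.
    METHODS get_employee_not_found  FOR TESTING.
    METHODS get_average_age         FOR TESTING.
ENDCLASS.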

 

10. Unit Test class implementation - part 1.

  • It is enough to replace the instance in ZCL_EMPLOYEE_DAO with the mock DAO instance before all tests are started.
  • This is the key point of the dependency injection used here.
  • Any further call during test execution by production code (for example the constructor of lo_employee_statistics) that obtains the DAO via ZCL_EMPLOYEE_DAO=>GET_INSTANCE( ) will now get our prepared mock DAO instance.
  • It is safe to inject the fake DAO, as this affects only the current user session, which ends after the tests are executed. Any other session that calls ZCL_EMPLOYEE_DAO=>GET_INSTANCE( ) will get the real DAO.

10_test_class_implementation_1.PNG

 

11. Unit Test class implementation - part 2.

 

11_test_class_imlpementation_2.PNG
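Part 2 is likewise only available as a screenshot; purely as a hedged illustration, one of the test methods might look like this, reusing the values from the mock DAO (the method name get_employee on the statistics class is an assumption):

  METHOD get_employee_found.
    DATA ls_employee TYPE zemployee_s.

    ls_employee = lo_employee_statistics->get_employee( '00001' ).

    cl_abap_unit_assert=>assert_equals(
      act = ls_employee-name
      exp = 'Adam Krawczyk'
      msg = 'Employee 00001 must be returned based on the mock DAO data' ).
  ENDMETHOD.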

 

I am also attaching a text file with all the code from the presented example, so you can use it for testing.

 

I hope that after reading this blog you will see how easy it is to write unit tests even for logic that requires database access. If it looks like a lot of code for such a simple example, believe me that it is worth spending the time and creating unit tests anyway. Even, and especially, complex reports need them: a simple change in the future may impact behavior, and a non-author is not sure whether a new line can safely be added. If the code is well tested, there is less chance of unexpected errors. Lately there are also tools that allow you to easily execute unit tests and measure code coverage, but that is another story.

 

Keep in mind that unit tests which skip the database by mocking it verify the business logic, but not the end-to-end program behavior. If there is an error in a real DAO method, for example in a SELECT statement, our tests will not discover it. That is why end user tests are important as well; but users then have less to test and are less likely to discover a logic bug after the code has already been tested on the unit level. Of course, it is also possible to write tests for the DAO class itself, for example by inserting, reading, validating, and deleting rows. But as I mentioned at the beginning, these are not pure unit tests, although they may still be helpful. Just group them in a category "may fail randomly".

 

There is one more advantage of using the DAO concept. If we delegate all database operations to DAO classes in our developments, they can be reused by anyone. Additionally, the class can be tested with F8 and single methods can be executed directly. In this way we can check the results of database statements (or function module calls) that are already implemented in the DAO, with no need to write temporary code or think about how to query a table or build a join.

 

I recommend using the DAO concept and always extracting database logic from the business layer, and I strongly encourage you to write unit tests whenever possible – try it and see the long-term benefits.

 

Good luck!

Creating ABAP unit tests in eclipse and SE80

$
0
0

Hi,

 

I believe that writing unit tests is very important for reliable, good-quality software. I have heard many times from other developers that customers are not interested in unit tests because they do not want to pay for them – but do they want to pay for even higher maintenance costs? Unit tests are not an additional feature; they are part of good, clean code that works. In this blog I want to show you how simple it is to create basic unit tests and what the difference is between SE80 and Eclipse in this area. Even if you have already learned how to create unit tests in SE80, it may be difficult to start again in Eclipse.

 

This article has the following sections:

  1. Why we need unit tests
  2. Where to use unit tests
  3. Unit test class format
  4. Predefined unit test methods
  5. How to build single unit test case
  6. Unit tests in SE80
  7. Unit tests in eclipse
  8. Executing unit tests
  9. Summary

 

 

1. Why we need unit tests?

 

- They verify if code behavior is correct.

- They run fast and give quick status feedback.

- They lead to more reliable code, main bugs are found earlier.

- They lead to better and simpler code design.

- They help to consider all possible input/output values.

- They can be automated.

- In the long term they greatly reduce maintenance costs.

 

Having just a few unit tests is not a lot, but it is a good start. If every developer starts to create unit tests regularly, there will be small islands of tested code which will finally grow into larger, well-tested areas.

 

 

2. Where to use unit tests?

 

Unit tests work easiest with classes. They are built inside a class and are an integral part of object-oriented design. However, it is possible to write unit tests for function modules as well. A block of code (method, function, form) is easy to test if it is isolated – it works only on input and output parameters and does not use external or global variables. That is why it becomes more difficult to test old legacy code and much easier to test new development: active use of unit tests leads to better code design.

 

In general, unit tests, as the name suggests, are designed for a limited scope: testing the behavior of basic units. It is easy to test the smallest methods, but not the complex logic of a whole report. If we are not able to test the full report flow, we can still test small parts of it, so that the solution is tested at the component level even if not as a whole. For example, take one method run_program_logic with 10 sub-methods inside. If we test all 10 sub-methods without testing the main method, the main flow logic will probably still work correctly; at least we will not run into problems with basic things like calculations, data conversion, or display formatting. End-to-end testing must anyhow be done in user acceptance tests, not in unit tests. Testing the full report in unit tests takes more time because we need to simulate many database queries etc., but at least we can try to test the main business logic.

 

We cannot directly test report blocks like INITIALIZATION, but we can implement them with object methods and then test those. For example:

 

INITIALIZATION.
lo_report->initialize_screen_fields( ).

 

For such code we can write unit tests for lo_report->initialize_screen_fields( ). Modularization is important.

 

 

3. Unit test class format

 

Unit tests are built in a class (local or global) with specific additions in definition part:

CLASS ltcl_my_test_class DEFINITION
  FOR TESTING
  DURATION SHORT
  RISK LEVEL HARMLESS.

 

Duration and risk level are test class attributes. Duration describes how long the system accepts the test run before it is terminated; risk level allows test execution to be disabled in case of high risk. Good unit tests should have the default values as above – they run quickly and do no harm to the system.

 

Usually a global class will have a local test class for its own code, but it is possible to use a global test class if we want to share test code across several objects and reuse it outside the class.

 

 

4. Predefined unit test methods

 

There are some method names that are reserved; if we implement them, they are called automatically by the unit test framework:

- class_setup – class method called once before all tests are run. The place for general initialization of static variables.

- class_teardown – class method called once after all tests have been executed.

- setup – method called before each single test, commonly used for data preparation.

- teardown – method called after each single test.

 

All these methods are optional. I recommend always implementing at least the setup method to create a new object for testing, so that each test case runs against a clean instance.
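For example, a minimal setup method could look like this (mo_cut, the "class under test" attribute, and the class name zcl_calculator are placeholder names, not part of the original text):

    " definition part of the test class
    DATA mo_cut TYPE REF TO zcl_calculator.   " class under test (name assumed)
    METHODS setup.

    " implementation part
    METHOD setup.
      " each test case starts with a fresh instance of the class under test
      CREATE OBJECT mo_cut.
    ENDMETHOD.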

 

 

5. How to build single unit test case.

 

A single unit test is implemented as a method of the unit test class. We can identify three phases of a test:

1. Initialization.

2. Code execution.

3. Results validation.

 

The first two are nothing new – we write code that prepares data and runs the production code we want to test, for example a single method of a class. The third phase, however, requires calling additional, special methods that validate whether the results are correct. We call them assertion methods.

 

There are different types of assertions; they are available as static methods of the standard class cl_abap_unit_assert. The most popular assertions are:

- assert_equals – checks whether two values are the same,

- assert_initial – checks whether a value is initial,

- assert_true – checks whether a condition is true.

 

The goal of an assertion method is to compare the actual and expected values and to raise an error to the unit test framework if they do not match. The important parameters are:

- act – the actual value retrieved from the code execution (phase 2).

- exp – the value that we expect. Some assertions do not need it, like assert_initial or assert_true – only the act value is needed.

- msg – the message shown to the user if the test fails. Although it is an optional parameter, I recommend using it often – it helps others to understand the meaning of the test case.

 

Example of method assertion:

 

    cl_abap_unit_assert=>assert_equals(
      act = lo_calculator->add( i_num1 = 5
                                i_num2 = 10 )
      exp = 15
      msg = '5 + 10 must be 15' ).

 

The example above shows the case where all three test phases are written as one statement: initialization (i_num1 = 5 and i_num2 = 10), execution (lo_calculator->add), and verification (assert_equals). This flexible call style is possible with the object-oriented approach, and I recommend it as it saves space.

 

In general, each test method should validate a single scenario, so it is good to have separate methods for the different variants. As an example, consider test methods like:

METHODS divide_success FOR TESTING.

METHODS divide_div_zero_exception FOR TESTING.

METHODS divide_missing_params FOR TESTING.

 

It is better to have three methods instead of one that tests all three cases. Why? If a test case fails, we know from its name exactly what stopped working (say only divide_div_zero_exception failed). The test name then describes not only which method was tested, but also which case of that method was run. This brings value especially when automated tests are scheduled periodically and we look at the overall test results. Of course, it is also acceptable to have only one test method "divide" and test the different scenarios inside it – we would still see that "divide" fails. Sometimes it is easier to keep several scenarios in one test method to avoid copy-pasting the same code, but in general it is better to split the scenarios into separate test methods and assign meaningful names.

 

 

6. Unit tests in SE80

 

It is easy to create unit tests for a global class in SE80. There is a nice wizard (Utilities -> Test Classes -> Generate) that helps us create unit tests for chosen methods of a class:

se80_wizard_methods.png

 

In the wizard we can also decide which predefined methods we want to have and what the attributes of the test class are.

- Fixture creates the setup and teardown methods.

- Class fixture creates the class_setup and class_teardown methods.

- Invocation creates the method execution with default initial parameters assigned.

- Default Assert Equals syntax may also be created.

I recommend selecting all options to have a full test class generated; we can later remove the parts we do not need.

se80_wizard_predefined.png

 

After we go through the wizard, a local class with unit tests is generated automatically. If we selected the "Generate Fixture" option, a setup method is created together with an f_cut object, which stands for "class under test" (the wizard uses the f_ prefix, although the usual convention for such an object would be mo_). A new object instance is created in setup, so each test gets a fresh instance to work on.

 

Now the only thing we need to do is fill the test method bodies with the test scenarios. In addition, we can add new test cases to the class directly in the source code editor if we want.

 

The wizard creates the same names for the test methods as in the tested class. I think this is a good approach. Normally we could add a test_ prefix, but only 30 characters are available for a method name.

 

 

7. Unit tests in eclipse

 

Eclipse does not have a wizard for automatic test creation. Initially I missed that very much, so I created the unit tests from the embedded SAP GUI and then edited the test code again in Eclipse. After some time, however, I learned how to write unit tests even more efficiently in Eclipse, and I do not need the wizard any more. Eclipse offers dynamic and flexible templates; it is important to be aware of this feature and use it for efficient code handling.

 

The first difference we see in Eclipse are the tabs at the bottom of the source code editor, which clearly show which part of the class is the global class (production code), and which are the local types or test classes. I like it, because it is now easy to find the place where the different kinds of code go.

 

bottom_tabs.PNG

 

To create the class, just type "test" in the "Test Class" view and press CONTROL + SPACE to see the template suggestions:

test_template.png

 

After choosing "testClass - Test class (ABAP Unit)", an initial version of the test class is generated. Templates contain predefined variables, which are marked with an editor frame. We can jump between the variables with TAB and change their names; renaming a single variable changes all its occurrences in the source code, which is convenient. In the default template we need to update the local test class name, the test attributes, and the first method name. Pressing ENTER finishes the variable editing mode and returns to the standard source code editor. Notice that the default prefix for the local class is "ltcl_" (local test class), and I use it as well.

 

source_template_variables.png

 

Templates are very powerful and useful. It is possible to change existing templates or create new ones, and they can be used for any development, not only for unit tests. Templates can be modified under "ABAP Templates" in the Window -> Preferences menu:

 

templates_settings.png

 

In my example, the standard testClass template is extended with an mo_cut object, which is created in the setup method by default, as I know I use this pattern often.

 

With the default template we have only one test. How do we add new tests now? Quite simply: just type a new line in the definition section:

 

METHODS my_second_test FOR TESTING.

 

Then place the cursor on my_second_test, press CONTROL + 1 (Quick Fix), and confirm with ENTER that you want to create the new method:

 

new_test_method.png

 

This creates an empty implementation of the method, and the cursor jumps inside it so you can immediately start writing code: initialize the variables, execute the method you want to test, and write the assertion. You do not need to remember the assertion syntax; again, templates come to help. Just type "assert", press CONTROL + SPACE, and choose the assertEquals template. By default the assertion is written in one line; I updated my template to have each parameter on a new line, as that suits my needs better.

 

These small tricks with CONTROL + SPACE and CONTROL + 1 for automated code generation speed up development a lot and make me work faster than in SE80. Technically it is now easier to practice Test Driven Development (TDD) in Eclipse, as we can write the tests first and design the global class by creating empty methods with the CONTROL + 1 Quick Fix feature from the test class.

 

Unfortunately there is no wizard that automatically generates tests for all methods, as in SE80. On the other hand, it is now up to the developer to decide which methods should be tested, and it does not take much time to generate the method signatures. Another option is to use the wizard from the embedded GUI.

 

 

8. Executing unit tests.

 

CONTROL + SHIFT + F10 is a shortcut worth remembering: it runs the unit tests for the current object. We can also execute unit tests on package level via right-click and the Execute Tests option. In Eclipse there is also the convenient shortcut CONTROL + SHIFT + F11, which runs the unit tests with code coverage – useful to see how well the unit tests cover the tested class. In SE80 we must run this from the menu: Local Test Classes -> Execute -> Execute Tests with -> Code Coverage.

 

It is possible to schedule automated test runs via:

  • the Code Inspector (SCI) with a check variant that contains unit test execution,
  • the program RS_AUCV_RUNNER, where we can specify packages/programs and automatic e-mail notifications.

 

 

9. Summary


Unit tests are important. However, it is not so easy to start creating them. I once compared learning unit tests to a trip up a high mountain: the climb is hard, and you may think it is not worth trying. But if you reach the top, there are beautiful views and you do not regret the decision. This is how I feel – standing at the top, happy with unit tests as part of my daily development. Unit tests make you think more broadly about your code; you need to consider all input and output parameters. You see a wider horizon, as from the top of that mountain.

 

It is easier to start with unit tests in SE80 because of the automatic wizard. On the other hand, it is more efficient to write them in Eclipse, thanks to the flexible templates and code completion. I spent more time explaining the Eclipse part not because it is more complex, but because it has more potential and tricks worth knowing.

 

If you want to write more advanced unit tests without database dependency, please also read my blog:

http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2013/03/21/abap-unit-tests-without-database-dependency--dao-concept

 

Good luck with unit testing!

Unit tests for exceptions

$
0
0

Hi,

 

In this short blog I would like to explain how we can write a unit test for a method that throws an exception. Even if this is a simple case for many, I got at least one question about it, which means the hints may be useful for others.

 

As far as I know, there is no assertion method or built-in feature in the ABAP unit test framework that checks whether a method throws an exception (it would be a good candidate for extending the framework, by the way). We need to handle the exception situation ourselves.

 

Normally, if an exception is thrown during the execution of a test method, the test is marked as failed. We need to avoid this error escalation and implement the test logic in a way that controls the exception and verifies that it actually occurred. I propose two similar variants:

 

Variant 1:

If the exception is not thrown, we call the fail method, which makes the test fail. If the method under test raises the exception as expected, the fail line is never reached and the test passes:

 

        TRY.
            mo_cut->raise_exception( ).
            cl_abap_unit_assert=>fail( 'Exception did not occur as expected' ).
          CATCH cx_root.
        ENDTRY.

 

Variant 2:

We introduce a flag that records whether the exception occurred. It has the value abap_false by default and changes only if we enter the CATCH section:

 

        DATA l_exception_occured TYPE abap_bool VALUE abap_false.

        TRY.
            mo_cut->raise_exception( ).
          CATCH cx_root.
            l_exception_occured = abap_true.
        ENDTRY.

        cl_abap_unit_assert=>assert_true(
          act = l_exception_occured
          msg = 'Exception did not occur as expected' ).

 

The first variant is shorter, but the second is more self-explanatory.

 

If you already use ABAP in Eclipse, I recommend creating a new template (Window -> Preferences -> ABAP Development -> Source Code Editor -> Templates), calling it assert_exception, and using it during unit test creation simply by typing "assert", then CONTROL + SPACE, assert_exception, ENTER. That helps.

 

It is also worth mentioning that there is already an assertion that checks the system return code: cl_abap_unit_assert=>assert_subrc. This method is similar to assert_equals; the difference is that we can skip the act parameter, as it defaults to sy-subrc.
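For example (a small hedged sketch; lt_flights and the READ TABLE line are only illustrative):

    DATA lt_flights TYPE STANDARD TABLE OF sflight.

    " ... fill lt_flights in the test setup ...

    READ TABLE lt_flights TRANSPORTING NO FIELDS WITH KEY carrid = 'LH'.

    cl_abap_unit_assert=>assert_subrc(
      exp = 0
      msg = 'Expected at least one LH flight in the test data' ).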

 

Kind regards

Adam


Reduce run-time of batch program - Part 1

$
0
0

Many programs, especially reports generated at year-end or reports that run once every month or quarter, take a lot of time. These reports are run in the background because they take so long, or they are executed automatically via job scheduling.

There are various ways of improving the performance or rather reducing the time taken for execution, so that the report output is available ahead of time or at least within some timelines which the user or customer desires.

 

Here are some of the different ways to reduce the run-time of a batch program, which I have come across and worked on, and which should be helpful to you in one way or another. They can be applied not only to existing programs that take a lot of time, but also before creating such report programs. There are two kinds of reports: standard and custom-created report programs.

 

1. Standard programs:

 

When talking about the run-time of standard batch programs, we do not have much control over it. As an ABAPer or developer, what can be done is to check for SAP Notes available for the program that takes a long time; SAP usually releases notes if the program is commonly run as a batch program and many customers have raised messages to SAP. Implementing the SAP Notes mostly resolves the issue or at least reduces the run-time.

Sometimes a memory parameter or system setting is needed in addition to the note implementation.

 

If no SAP Note is available, we can raise a message to SAP after checking the other parameters that can be configured, such as memory and system parameters.

 

Apart from setting or configuring the parameters for the job, some things mostly related to the program logic, i.e. the performance of the program/job, need to be taken care of: in a standard program, check whether there is customer exit or BAdI coding that might need fine-tuning.

 

2. Custom programs:

 

In the case of custom programs, there are a lot of things which we as ABAPers can plan before creating a report program, or apply when tuning an existing program to reduce its run-time. Some existing programs can even be changed to run in the foreground (if needed) instead of the background if most of the points below are taken care of.

There are two ways in which run-time can be reduced: one is the memory, system settings for the job/program and second is the coding part.

We have to concentrate more on the coding part to improve the performance and reduce the run-time of the program. The report has to be coded considering all performance tuning techniques; each technique plays a part in reducing the run-time. For example, populating an internal table as below:

 

                         itab1[] = itab[].            " faster

                         LOOP AT itab.                " slower than the statement above
                           APPEND itab TO itab1.
                         ENDLOOP.

 

This alone can save some seconds, depending on the number of records in ITAB[], and writing code like this throughout the program makes a noticeable difference to its run-time. Also, clearing the memory used by an internal table at the end of a FORM, if its data is not needed later, frees memory and thereby indirectly helps performance.
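For example (a small hedged sketch; itab stands for any internal table whose content is no longer needed):

  FREE itab.      " releases the rows and the memory allocated for the table body
  " or
  CLEAR itab[].   " deletes the rows; already allocated memory may be kept for reuse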

 

When it comes to performance tuning in coding, there are three things, .i.e. ABAP statements, Database statement or queries and system. Out of which ABAP and database statements can be controlled.

 

ABAP statements:

Each ABAP statement has a specific execution time. Thus, when coding, do not only think of a statement that fulfills the requirement, but also analyze which statement is the best one to use, as in my example above of populating an internal table when the data types of both tables are the same.

ABAP statements sometimes take more time than database statements when they are written without analysis or without knowing their impact. It is worth analyzing even a simple ABAP statement while writing it.

 

To understand the execution time of different ABAP statements, log on to SAP and execute the program RSHOWTIM; alternatively, navigate there via the menu bar -> Environment -> Examples -> Performance Examples. This will be useful.

 

There are lot of artifacts, documents already available in SDN wiki, forums which might be useful to you for performance tuning.

http://wiki.sdn.sap.com/wiki/display/ABAP/ABAP+Performance+and+Tuning

 

Database statements:

 

Most of the time, when we do a run-time analysis, the largest share of the time is consumed by database hits – pulling data from database tables or updating/deleting/inserting data. Hence proper analysis has to be done when coding queries. It is better to understand the requirements: what the program is going to do, how frequently it will run, how many records and tables are involved, what kind of tables they are, and whether the program will be enhanced later (most programs are initially created without knowing whether they will be enhanced later or not).

We can take more time to analyze which type of query should be coded – whether to go for joins like INNER JOIN or OUTER JOIN, or for FOR ALL ENTRIES – depending on the tables which are used, especially when cluster tables and views are involved.

Sometimes FOR ALL ENTRIES takes less time than an INNER JOIN, so choose the proper query depending on the tables. Use the primary key fields in the WHERE clause, in the same order as they appear in the table, when picking up data from tables (see the sketch below).
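As an illustration, here is a hedged sketch of the two variants using the standard sales order tables VBAK/VBAP; which one is faster depends on data volumes, indexes, and the database platform:

TYPES: BEGIN OF ty_item,
         vbeln TYPE vbap-vbeln,
         posnr TYPE vbap-posnr,
         matnr TYPE vbap-matnr,
       END OF ty_item.

DATA: lt_items  TYPE STANDARD TABLE OF ty_item,
      lt_orders TYPE STANDARD TABLE OF vbak.

" Variant 1: INNER JOIN - header and item data in one database statement
SELECT p~vbeln p~posnr p~matnr
  FROM vbak AS k INNER JOIN vbap AS p ON p~vbeln = k~vbeln
  INTO TABLE lt_items
  WHERE k~erdat = sy-datum.

" Variant 2: FOR ALL ENTRIES - two statements; the driver table must not be empty
SELECT * FROM vbak INTO TABLE lt_orders WHERE erdat = sy-datum.
IF lt_orders IS NOT INITIAL.
  SELECT vbeln posnr matnr
    FROM vbap
    INTO TABLE lt_items
    FOR ALL ENTRIES IN lt_orders
    WHERE vbeln = lt_orders-vbeln.
ENDIF.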

Try to use standard function modules, if available, in place of hand-written SELECT queries: this not only tends to reduce execution time, but also gives you error handling and a well-written query.

 

There is also an option to configure the setting for every job/batch program with different parameters like priority or job class which will control the run-time of the program. This can be done for both custom and standard batch programs.

 

Existing custom programs:

For existing custom programs, if we want to reduce the run-time, then we can go for below approaches.

Do a run-time analysis in transaction SE30 and check the execution time split between ABAP, database, and system. From that we learn what needs fine-tuning: ABAP statements, database statements, or the system settings. Also run a performance analysis in ST05: activate the trace, execute the program, and analyze how much time each SELECT query of the program has taken.

 

The above approaches will also be applicable to Function modules, Class methods or Module pools.

 

Any additions to this blog are welcome.

Basics of eCATT (Video Series) Part 1 : System Data Container

$
0
0

Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 1 : System Data Container

 

 

Best Regards,

Gopal Nair.

Basics of eCATT (Video Series) Part 2 : Test Script Initial Creation & Testing

$
0
0

Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 2 : Test Script Initial Creation & Testing

 

 

Best Regards,

Gopal Nair.

Basics of eCATT (Video Series) Part 3 : Test Script Recording

$
0
0

Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 3 :Test Script Recording

 

Best Regards,

Gopal Nair.

Basics of eCATT (Video Series) Part 4 : Test Script Recording Initial Dry Run

$
0
0

Preface

Inspired by the video tutorials made by Thomas Jung and also by the openSAP courses (http://open.sap.com), I decided to try my hand at video tutorials. I have always liked the "seeing and learning" experience, especially when starting out with a new technology.

 

Lesson 4 :Test Script Recording Initial Dry Run

 

 

 

Best Regards,

Gopal Nair.


