Tracking Locks in Oracle Database

A web application I was tracing had serious database performance issues. Reviewing the AWR report, I found that I/O was incredibly slow, but one odd behavior made me suspect locks were also contributing to the poor performance: while running JMeter, I saw periods where no responses were received from the server until, suddenly, everything started running, only to get stuck again.
So, how did I track locks in the database? I used the following queries (of course, all of them require SYSDBA to run):

SELECT l.session_id||','||v.serial# sid_serial,
       l.ORACLE_USERNAME ora_user,
       o.object_name,
       o.object_type,
       DECODE(l.locked_mode,
          0, 'None',
          1, 'Null',
          2, 'Row-S (SS)',
          3, 'Row-X (SX)',
          4, 'Share',
          5, 'S/Row-X (SSX)',
          6, 'Exclusive'
       ) lock_mode,
       to_char(o.last_ddl_time, 'DD-MON-YYYY HH24:MI:SS') last_ddl
FROM dba_objects o, gv$locked_object l, v$session v
WHERE o.object_id = l.object_id
      and l.SESSION_ID = v.sid
order by 2,3;

I use the above query to see if actual locks exist in the DB (it runs pretty fast, so I use it often).

select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1,
     v$lock l2
where l1.block = 1 and
      l2.request > 0 and
      l1.id1 = l2.id1 and
      l1.id2 = l2.id2;
I use the above query to see which session is blocking which (very useful for detecting deadlocks).
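To illustrate how I read its output: each row is a (blocker SID, waiter SID) pair, and a deadlock shows up as a cycle in those pairs. Here is a minimal sketch of that check in plain Java (the SIDs are made-up placeholders, and the class is mine, not part of any Oracle API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DeadlockCheck {
	// Each entry maps a waiter SID to the SID blocking it
	// (i.e. one row of the v$lock self-join above).
	static boolean hasDeadlock(Map<Integer, Integer> blockerOf) {
		for (Integer start : blockerOf.keySet()) {
			Set<Integer> seen = new HashSet<>();
			Integer current = start;
			while (current != null) {
				if (!seen.add(current)) {
					return true; // cycle: the sessions block each other
				}
				current = blockerOf.get(current);
			}
		}
		return false;
	}

	public static void main(String[] args) {
		Map<Integer, Integer> pairs = new HashMap<>();
		pairs.put(42, 17); // hypothetical SIDs: 17 blocks 42...
		pairs.put(17, 42); // ...and 42 blocks 17 - a deadlock
		System.out.println(hasDeadlock(pairs)); // prints true
	}
}
```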

select s1.username || '@' || s1.machine
  || ' ( SID=' || s1.sid || ' ) is blocking '
  || s2.username || '@' || s2.machine || ' ( SID='
  || s2.sid || ' ) ' AS blocking_status,
  a1.sql_text AS blocking_sql,
  a2.sql_text AS blocked_sql
from v$lock l1,
     v$session s1,
     v$lock l2,
     v$session s2,
     v$sqlarea a1,
     v$sqlarea a2
where s1.sid = l1.sid and
      l1.BLOCK = 1 and
      s2.sid = l2.sid and
      l2.request > 0 and
      l1.id1 = l2.id1 and
      l1.id2 = l2.id2 and
      s1.sql_hash_value = a1.hash_value and
      s1.sql_address = a1.address and
      s2.sql_hash_value = a2.hash_value and
      s2.sql_address = a2.address;

The above query is my favorite (alas, it is very slow...) as it shows both the locking SQL and the locked SQL. If you know your code, you can easily identify which query causes the locking problem.

Updating Pentaho PRPT files to add a PreProcessor

In my previous post (see here) I mentioned that I couldn’t add a pre-processor to a Pentaho report using the report designer. So, I’ve written a short Java program that does just that.
Note that I use a neat open source library called Zip4J (you can get it here).

package com.tona.rprt;


import java.io.File;
import java.io.FileWriter;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import net.lingala.zip4j.core.ZipFile;
import net.lingala.zip4j.model.ZipParameters;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class Main {
	private static final String CONFIG_FILE_NAME = "layout.xml";
	public static void main(String[] args) throws Exception {
		ZipFile reportFile = new ZipFile(""); // path to your .prpt file goes here

		File tempDirectory = createTempDirectory();
		String path = tempDirectory.getAbsolutePath();
		reportFile.extractFile(CONFIG_FILE_NAME, path);

		System.out.println("Extracted file to " + path);
		File updatedFile = new File(path + File.separator + CONFIG_FILE_NAME);

		// Update the file
		DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
		DocumentBuilder db = dbf.newDocumentBuilder();
		Document doc = db.parse(updatedFile);
		System.out.println("Parsed document");
		Element layoutNode = doc.getDocumentElement();
		Element preProcessorElement = doc.createElement("preprocessor");
		preProcessorElement.setAttribute("class", ""); // fully qualified pre-processor class name goes here
		Node firstLayoutChild = layoutNode.getFirstChild(); 
		layoutNode.insertBefore(preProcessorElement, firstLayoutChild);
		System.out.println("Added child");

		FileWriter output = new FileWriter(updatedFile);
		StreamResult result = new StreamResult(output);

		TransformerFactory tf = TransformerFactory.newInstance();
		Transformer t = tf.newTransformer();
		t.transform(new DOMSource(doc), result);
		output.close();
		System.out.println("Updated XML file");

		ZipParameters parameters = new ZipParameters();
		reportFile.addFile(updatedFile, parameters);
		System.out.println("Updated ZIP file");

		updatedFile.delete();
		tempDirectory.delete();
		System.out.println("Removed temporary directory");
	}

	private static File createTempDirectory() throws Exception {
		File temp = File.createTempFile("temp", Long.toString(System.nanoTime()));

		if (!(temp.delete())) {
			throw new Exception("Could not delete temp file: " + temp.getAbsolutePath());
		}

		if (!(temp.mkdir())) {
			throw new Exception("Could not create temp directory: " + temp.getAbsolutePath());
		}

		return temp;
	}
}

Templating with Pentaho BI

I’m trying to build multiple reports in Pentaho Report Designer, and couldn’t find a really good way to implement templating. You see, I want the report design to change when I change my template.
So, I wrote some code. The WizardProcessor class is a Pentaho class meant to be used by report pre-processors (see here for more info on the subject), and extending it is easy.

public class TonaWizardProcessor extends WizardProcessor {

	@Override
	public MasterReport performPreProcessing(MasterReport definition,
			DefaultFlowController flowController)
			throws ReportProcessingException {

		// Init header
		ReportHeader header = definition.getReportHeader();

		float pageWidth = definition.getPageDefinition().getWidth();

		// Add black background
		Element e = RectangleElementFactory.createFilledRectangle(0, 0,
				pageWidth, 100, Color.BLACK);
		header.addElement(e);

		// Add report title
		Element titleElement = LabelElementFactory.createLabelElement(
				"TopTitleLabel", new Rectangle2D.Double(0, 0,
						pageWidth - 100, 100), Color.WHITE,
				ElementAlignment.LEFT, new FontDefinition("Arial", 24),
				"Report Title"); // placeholder title text
		header.addElement(titleElement);

		// Add date
		Element dateElement = DateFieldElementFactory.createDateElement(
				"TopDateLabel", new Rectangle2D.Double(pageWidth - 100, 0, 100,
						100), Color.WHITE, ElementAlignment.RIGHT,
				new FontDefinition("Arial", 12), "-", "MMM dd, yyyy",
				"report.date"); // field name - adjust to your report
		header.addElement(dateElement);

		// Init footer
		ReportFooter footer = definition.getReportFooter();

		// Add date
		Element footerDateElement = DateFieldElementFactory.createDateElement(
				"FooterDateLabel", new Rectangle2D.Double(0, 0, pageWidth / 2,
						14), Color.BLACK, ElementAlignment.RIGHT,
				new FontDefinition("Arial", 12), "-", "MMM dd, yyyy",
				"report.date"); // field name - adjust to your report
		footer.addElement(footerDateElement);

		// Add pages count
		Element footerPagesElement = TextFieldElementFactory
				.createStringElement("FooterPageLabel", new Rectangle2D.Double(
						pageWidth / 2, 0, pageWidth / 2, 14), Color.BLACK,
						ElementAlignment.LEFT,
						new FontDefinition("Arial", 12), "-", "PageFunction0");
		footer.addElement(footerPagesElement);

		// Create orange row banding
		Element[] items = definition.getItemBand().getElementArray();

		// Setting temp names for the row banding
		int index = 0;
		for (Element item : items) {
			item.setName("band-element-" + index++);
			makeBorder(item);
		}

		RowBandingFunction bandingFunction = new RowBandingFunction();
		bandingFunction.setVisibleBackground(Color.ORANGE);
		definition.addExpression(bandingFunction);

		// Change all group header backgrounds to orange
		for (int i = 0; i < definition.getGroupCount(); ++i) {
			for (Element groupHeaderElement : definition.getGroup(i)
					.getHeader().getElementArray()) {
				groupHeaderElement.getStyle().setStyleProperty(
						ElementStyleKeys.BACKGROUND_COLOR, Color.ORANGE);
			}
		}

		return super.performPreProcessing(definition, flowController);
	}

	private void makeBorder(Element element) {
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_TOP_WIDTH, new Float(1));
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_LEFT_WIDTH, new Float(1));
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_BOTTOM_WIDTH, new Float(1));
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_RIGHT_WIDTH, new Float(1));
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_BREAK_WIDTH, new Float(1));

		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_TOP_STYLE, BorderStyle.SOLID);
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_LEFT_STYLE, BorderStyle.SOLID);
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_BOTTOM_STYLE, BorderStyle.SOLID);
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_RIGHT_STYLE, BorderStyle.SOLID);
		element.getStyle().setStyleProperty(
				ElementStyleKeys.BORDER_BREAK_STYLE, BorderStyle.SOLID);
	}
}

After you write your class, write a simple JUnit class to test it:

public class ProcessorTest {

	@Test
	public void test() {
		ClassicEngineBoot.getInstance().start();
		ResourceManager manager = new ResourceManager();
		manager.registerDefaults();
		String reportPath = "file:/home/liran/test.prpt";

		try {
			Resource res = manager.createDirectly(new URL(reportPath),
					MasterReport.class);
			MasterReport report = (MasterReport) res.getResource();
			report.addPreProcessor(new TonaWizardProcessor());
			File file = new File("/home/liran/output.html");
			PrintStream ps = new PrintStream(file);
			HtmlReportUtil.createStreamHTML(report, ps);
			ps.close();
		} catch (Exception e) {
			e.printStackTrace();
			fail(e.getMessage());
		}
	}
}

Implementing it at the report level is a bit more difficult, as I couldn't do it in the PRD (Pentaho Report Designer). But changing the layout.xml file (found inside your prpt file) is easy. Just add the following line as the first child of the root layout element, with your pre-processor's fully qualified class name (the package here is a placeholder):

<preprocessor class="your.package.TonaWizardProcessor"/>

And you're done.

Note that your pre-processor class (contained in a JAR file) must be found in the REPORT_DESIGNER/lib directory – otherwise the designer won’t open your report. Same is true for the BI-SERVER (put it in biserver-ee/tomcat/lib directory).

I’m writing Java code that will allow you to automatically patch the prpt file to use the pre-processor. I’ll post the code in a future post.

Using Oracle tkprof with JDBC thin client application

When attempting to profile a Hibernate based application, I found a statement that was incredibly slow and caused the system to basically halt for a few seconds before resuming execution. I wanted to profile it at the database level, and the best tool for the job is Oracle’s own tkprof.
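For reference, once you have a trace file, running it through tkprof is a one-liner (the file names here are hypothetical; sys=no filters out recursive SYS statements and sort=exeela sorts statements by elapsed execution time):

```
tkprof SID_ora_12345.trc report.txt sys=no sort=exeela
```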
The input for tkprof is a session trace file, and enabling one is a bit tricky. The reason: a JavaEE application with multiple threads has multiple connections and multiple database sessions, and SQL_TRACE works at the session level (I didn't want to enable it for the entire database, as the trace files would be totally unusable...).
So, I took the code and ran it in a standalone Java application, and enabled SQL Trace. Here’s how:

package com.tona.jdbc;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Types;

import oracle.jdbc.OracleCallableStatement;

public class TestJDBC {
	// Placeholder - put your real connection details here
	private static final String JDBC_CONNECTION_STRING =
			"jdbc:oracle:thin:user/password@host:1521:SID";

	public static void main(String[] args) throws Exception {
		Connection c = DriverManager.getConnection(JDBC_CONNECTION_STRING);

		// Set SQL_TRACE to on
		Statement stmt = c.createStatement();
		stmt.executeUpdate("ALTER SESSION SET SQL_TRACE=TRUE");

		// Set SQL Trace location
		stmt = c.createStatement();
		stmt.executeUpdate("alter system set USER_DUMP_DEST='/tmp'");

		// JDBC logic comes here...
	}
}
Changing the USER_DUMP_DEST parameter did not have any effect, and the SQL_TRACE output was written to the default trace directory on the server (in my case, /u01/app/oracle/diag/rdbms/SID/SID/trace).
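To find the exact trace file the session wrote, you can ask the database from the same session (this works on 11g and later; on older versions you have to dig through the USER_DUMP_DEST directory yourself):

```
-- Full path of the current session's trace file (11g+)
SELECT value
FROM v$diag_info
WHERE name = 'Default Trace File';
```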

Batch Update with JPA

A customer of mine has a highly scalable system with high database load. To improve performance, we decided to use batch updates. Alas, the application uses JPA, so how do we do it?

SessionFactoryImplementor sfi = (SessionFactoryImplementor) entityManager.unwrap(Session.class).getSessionFactory();
// Retrieve DB connection
Connection connect = sfi.getConnectionProvider().getConnection();
PreparedStatement ps = connect.prepareStatement("INSERT INTO temp_table values(?,?)");
for (Data p : insertCommands) {
	ps.setInt(1, p.getId());
	if (p.isWorking() != null) {
		ps.setInt(2, p.isWorking() ? 1 : 0);
	} else {
		ps.setNull(2, Types.INTEGER);
	}
	ps.addBatch();
}
ps.executeBatch();
connect.commit();
ps.close();

Some points to remember:

  1. Do not close the connection!!! Closing it just returns it to the connection pool, without a commit or rollback. You'll get some exceptions in your code, but mainly, you're exposed to database locks and deadlocks, because the connection keeps all the locks taken during the batch update, and since other threads will reuse the pooled connection, it never actually closes and the locks are never released. Commit (or roll back) explicitly when the batch is done.
  2. Do not use Statement.addBatch. Use PreparedStatement.addBatch. When I checked both solutions, the Statement version took anywhere between 30s and 60s; the PreparedStatement version took less than 300ms...
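One detail in the loop above that is easy to get wrong is the nullable Boolean to 0/1/NULL mapping: writing `p.isWorking() ? 1 : 0` directly on a null Boolean throws a NullPointerException at unboxing. Pulled out as a tiny helper it can be tested on its own (class and method names are mine, not from the customer's code):

```java
public class BooleanMapper {
	// Maps a nullable Boolean to the 0/1 encoding used by the batch insert;
	// a Java null result means the caller should use ps.setNull(2, Types.INTEGER).
	static Integer toDbInt(Boolean working) {
		if (working == null) {
			return null;
		}
		return working ? 1 : 0;
	}

	public static void main(String[] args) {
		System.out.println(toDbInt(Boolean.TRUE));  // prints 1
		System.out.println(toDbInt(Boolean.FALSE)); // prints 0
		System.out.println(toDbInt(null));          // prints null
	}
}
```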