Thursday, September 04, 2014

The curious problem of Apex login page session expiration

Always check out the original article at http://www.oraclequirks.com for the latest comments, fixes and updates.

Did you ever notice that if you open an Apex page requiring authentication but you don't log in for a long time, you may be bounced back when you finally try?

That happens because even the Apex login page obeys the session expiration rules defined either at the application level or, failing that, at the instance level.

For example, if the default session duration is 3600 seconds and you don't log in within 3600 seconds, Apex won't accept your credentials and will reload the login page instead.
But that is not the end of the story: the session lifetime starts as soon as you load the login page, so if the total lifetime is 3600 seconds and you log in after 3500 seconds, there will be only 100 seconds left before Apex kicks you out of the application.

You can easily see this behavior in action if you set a rather short session lifetime, say 60 seconds, in a test application (don't do this in a production application, please).
For your convenience I created a simple demo application on apex.oracle.com.

Now, you can work around the problem by setting the session duration to the maximum of 12 hours (43200 seconds), provided you don't mind and there are no special security requirements. Setting the session duration to a value higher than 12 hours is pointless because a scheduled job deletes sessions older than 12 hours anyway.

Or perhaps it could make sense to use this trick: put the (in)famous meta "refresh" HTML tag in the login page header with a reasonable refresh time, so that the user is always presented with a fresh login page.

Now, the question is: what is a reasonable value?
I think it depends on the session lifetime you define: for short session durations it must be a small number, while for long session durations it can be much larger. For the reason explained above, users surely don't like to be thrown out of the application after a very short time.

For instance, if the total session duration is defined to be 2 hours and you reload the login page every 10 minutes, in the worst case the user will still have 1 hour and 50 minutes available, that is roughly 90% of the maximum time. Conversely, if you refresh every hour, in the worst case there will be just 50% of the time left.

<meta http-equiv="refresh" content="600;&LOGOUT_URL.">

Again, you can see a second demo app showing how it works. In this case I set the refresh to 20 seconds.

In case you don't want to hard-code a fixed value, you can retrieve MAXIMUM_SESSION_LIFE_SECONDS from Apex's dictionary view APEX_APPLICATIONS (Apex 4.0 or later only) and store it in an application item that you can put inside the page header in its substitution string form:

<meta http-equiv="refresh" content="&LOGIN_REFRESH.;&LOGOUT_URL.">

Application item LOGIN_REFRESH can be calculated in a before-header computation that runs only on your login page, and you can adjust the value according to some algorithm, for instance 10% of the session lifetime.
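As a sketch, the computation source could be a SQL query along these lines (the 10% factor and the 600-second fallback are arbitrary choices of mine; the view and column names are from the Apex 4.x dictionary, so verify them against your release):

```sql
-- 10% of the application-level maximum session life,
-- falling back to 600 seconds when no value is defined.
SELECT NVL(TRUNC(maximum_session_life_seconds * 0.1), 600)
  FROM apex_applications
 WHERE application_id = :APP_ID;
```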

It is worth noting that besides the numerical value I also added the LOGOUT_URL substitution string, because after some tests I realized that this is the only way to avoid reloading the page with the same session number.

Please note that this approach requires that you always specify the maximum session life at the application level because, as far as I know, there is no dictionary view exposing the instance-wide value.

Friday, August 22, 2014

Sharing SQLDeveloper SQL history across several machines with Dropbox


A quick tip for keeping a single SQL history repository, synchronized with Dropbox, shared among different installations of SQLDeveloper for the same user on distinct computers.
This seems to be working just fine on Mac OS X with SQLDeveloper 4.0.2, however it's a totally unsupported hack, which means ** use at your own risk **.

I don't know what will happen when I upgrade to the next release of SQLDeveloper, but I'll find out sooner or later.
From what I've seen, SQLDeveloper makes a copy of the history folder when it upgrades an existing installation. Maybe it will crash, or maybe I'll have to repeat the whole procedure, I don't know. For the moment I really appreciate the fact that I can pick up work in a different location exactly where I left off, without having to worry about copying certain queries manually.

As your SQL history is going to be bigger than before, it makes sense to increase the maximum number of statements that you want to keep: go to the Preferences/Database/Utilities/Worksheet pane and raise the SQL history limit. Remember to set the same value on all the machines *before* proceeding with the next step, otherwise the installation with the shortest history will wipe out the files added by the others!

From the shell prompt, assuming you are in your home folder and SQLDeveloper is NOT running:
cd .sqldeveloper
cp -R SqlHistory ~/Dropbox/.SqlHistory
mv SqlHistory SqlHistory.old
ln -sf ~/Dropbox/.SqlHistory/ SqlHistory

SqlHistory.old is your safety net, in case something goes wrong.

Then, on the second machine you may either decide to get rid of the current history and use the history shared from the first machine or you may try to merge them.

In the former case, just execute:

cd .sqldeveloper
mv SqlHistory SqlHistory.old
ln -sf ~/Dropbox/.SqlHistory/ SqlHistory

In the latter case, in addition to the three steps above, execute also:

cp -R SqlHistory.old SqlHistory.tmp 
mv -n SqlHistory.tmp/* ~/Dropbox/.SqlHistory

Any duplicate filenames (albeit with different SQL content) will remain in the SqlHistory.tmp folder, so you have a chance to decide what to do with them. If SqlHistory.tmp is empty at the end of the operation, just get rid of it.

SQLDeveloper keeps the log as a bunch of XML files whose names are generated by prepending a pseudo-random number (I don't know the details) to the fixed string "history.xml".
As I don't know whether this number is guaranteed to be unique across different computers by virtue of some clever algorithm, I guess that occasionally there could be a conflict with an existing file, but when I merged the histories of my SQLDeveloper installations there were no conflicts at all (maybe the magic number is based on the timestamp, which would explain why there were no conflicts, as I haven't been given the gift of ubiquity...).

If there is a third machine, repeat the same steps as for the second one and so on.

It could probably make sense to do the same with the library of connections, but that is an exercise I'll keep for another occasion.

I guess the same procedure should also work on most Linux distros, as I don't see any difference under the hood.

Tuesday, August 12, 2014

ORA-27476: "SYS.MY_FANCY_JOB_CLASS" does not exist


Ever exported a whole schema containing DBMS_SCHEDULER jobs with non-default job classes?
Chances are that you'll hit the following error message when you attempt to import the dump later on:
ORA-39083: Object type PROCOBJ failed to create with error:
ORA-27476: "SYS.MY_FANCY_JOB_CLASS" does not exist

Job classes defined with DBMS_SCHEDULER.CREATE_JOB_CLASS are not owned by the schema user but by user SYS (as the error message itself suggests).

As a result, any job whose job class has not been re-created before importing the dump won't be (re)created.
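As a quick check before taking the export, you could list the jobs in the source schema that rely on a non-default job class (a sketch based on the documented USER_SCHEDULER_JOBS view; the default class name is the standard DEFAULT_JOB_CLASS):

```sql
-- Jobs that will fail to import with ORA-27476
-- unless their job class is re-created on the target first.
SELECT job_name, job_class
  FROM user_scheduler_jobs
 WHERE job_class <> 'DEFAULT_JOB_CLASS';
```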

If, for some reason, the script for creating the job is not readily available but you have access to the database where the dump was taken, you can quickly generate a script for the job via the "Create like" feature of SQLDeveloper.
Note however that the missing job class must be (re)created manually; SQLDeveloper doesn't help with that.
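A minimal sketch for re-creating the missing class on the target database before running the import (the class name is taken from the error message above, while the attribute values are placeholders: use those of the original class on the source database; you also need the MANAGE SCHEDULER privilege):

```sql
BEGIN
  -- Re-create the job class that the dump expects to find.
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'MY_FANCY_JOB_CLASS',
    logging_level  => DBMS_SCHEDULER.LOGGING_RUNS,
    comments       => 'Re-created before import to avoid ORA-27476');
END;
/
```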

