CSS Quicktips: Removing RSS feed links from the Koha OPAC

Earlier today a fellow Koha user asked:

“How to deactivate the News RSS link in OPAC?”


The RSS feed link is contained in a <div> with the id rssnews-container. To control its visibility, we simply need to add the following ruleset to our OPACUserCSS system preference:

#rssnews-container {
    display: none !important;
}

And while we are on the topic, there is another location where the RSS feed link shows up: our OPAC search results page.


In this case, however, our approach needs to be slightly different, as the link is presented as an anchor (a) element with the associated class rsssearchlink. Other than using a different CSS selector, the rest is the same, i.e. we turn off the visibility of the element like this:

a.rsssearchlink { display: none !important; }

As we commonly say in Bengaluru (erstwhile Bangalore) – Enjoy maaDi 🙂

Hiding “Authority search” and “Tag cloud” search options on Koha OPAC using CSS.

CSS one-liner to hide Authority search & Tag cloud options on the Koha OPAC.

Client partners not using the authority files and tag cloud features of Koha often insist that we remove the options – “Authority search” and “Tag cloud” – from the OPAC. Now, if we look at the DOM (Document Object Model) of the OPAC main page, we will see that these searches are located inside a <div> element with the id moresearches.

<div id="moresearches">
      <a href="/cgi-bin/koha/opac-search.pl">Advanced search</a>
      <a href="/cgi-bin/koha/opac-authorities-home.pl">Authority search</a>
      <a href="/cgi-bin/koha/opac-tags.pl">Tag cloud</a>

The other important thing to note here is the CSS (Cascading Style Sheets) rules for the div and li elements, and for the li:after pseudo-element [1].

div {
    display: block;
}
#moresearches li {
    display: inline;
}
#moresearches li:after {
    content: " | ";
}

Note: The li:after rule is what adds the “ | ” (pipe) separator between the options.

So, we are going to “hide” them, and we will mainly use the CSS3 :nth-child() pseudo-class selector to do this.

CSS3 :nth-child simplified

If you want a formal definition, go ahead and read this CSS/Selectors/pseudo-classes/:nth-child from the W3C (World Wide Web Consortium) wiki.

Let us see if it is better understood with an actual example! In our case, we can see that our <ul> (unordered list) element has 03 (three) <li> (list item) child elements. nth-child is a counter that starts from 1 (one); thus, nth-child(2) signifies the second “child” element, which here points to the second <li>, i.e. “Authority search”. The n in nth-child is simply a way to refer to a specific child.

The term “selector” simply means a rule / ruleset (i.e. code) that helps us latch on to a particular element (or its content) by using its (a) name; (b) id; (c) class; (d) contents; or (e) position, or a combination of these and other attributes, and then do whatever we want with it. When selectors are used with CSS, we are usually talking about styling the selected elements.

Hint: making an element (or group of elements) in your DOM invisible (as in this case) also counts as styling.

The solution

While we can write the ruleset in a single line, here we have broken it up into two lines for better readability and understanding.

#moresearches > ul > li:nth-child(n+2) { display: none; }

#moresearches > ul > li:after { display: none; }

The counter n starts from 0, so nth-child(n+2) matches the second child onwards; in this case, that means the two <li> elements *after* the first one, i.e. after “Advanced search”. Once selected, we set them to display: none;. This turns off their visibility while still retaining the elements in the DOM. The second line is more cosmetic in nature: it turns off the li:after pseudo-element so that there is no dangling “|” displayed after “Advanced search”.
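For the record, the same thing can also be written as one grouped ruleset (functionally identical to the two lines above):

#moresearches > ul > li:nth-child(n+2),
#moresearches > ul > li:after { display: none; }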


[1] http://www.w3schools.com/css/css_pseudo_classes.asp

Quick tip: Add “barcode” lookup to your OPAC’s search index selection dropdown

If you wish to add an option to the OPAC search dropdown, e.g. “Barcode”, you can achieve it with a single line of jQuery. There is absolutely NO need to edit masthead.inc as suggested in BUG #8302 – http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=8302. This jQuery one-liner does the job rather well; simply place it inside your OPACUserJS system preference.

$("select[name='idx']").append("<option value='bc'>Barcode</option>");

If you wish to know how I set the value to bc, I suggest you take a dive into the file ccl.properties in your Koha installation.

P.S. You can also search as bc: <your barcode number> in the search box. That works too, even without adding the option to the drop-down, as you are directly passing a CCL query option back to the Koha search module.

This post is based on an earlier Facebook post published here https://www.facebook.com/l2c2technologies/posts/775648639191038 on Feb 23, 2015.

The 5-Minute Series: Adding Lightbox support to your Koha OPAC

Ever wanted to add a Lightbox enabled custom gallery page to your Koha ILS installation? Here’s how you can do it, with a bit of background and a brief touch on caveats and afterthoughts of implementing it.

Recently one of our client partners, Ms. Parama Sarkhel, Librarian, Ramakrishna Sarada Mission Vivekananda Vidyabhavan, requested us to add an image gallery with photos of the library and library events to her OPAC. We promised her that we’d look into it.


While we had set up image galleries for others before, this time we decided to do something different. As any seasoned Koha user will know, you can add custom pages to your Koha OPAC. Adding a gallery typically means adding a new custom page with a bunch of photos and the necessary CSS styles, along with (usually) some JavaScript to handle it. In fact, if you are not too bothered about supporting older browsers, you can set up a gallery using CSS3 alone.

NOTE 1: If you are a new user of Koha, you may wish to read up on Appendix H. Using Koha as a Content Management System (CMS) from the Koha 16.05 Manual (en). Be advised that the instructions for Koha as a CMS are slightly out of sync, so you need to change where it says “pages.tmpl” to “pages.tt” to be able to actually use them.

NOTE 2: Prior to the release of Koha v3.4.3 [1] on July 25, 2011, Koha used the HTML::Template Perl module for handling its template requirements. Via bug #5917 [2], Koha 3.4.3 switched over to the Template::Toolkit templating system [3]. The reference to “pages.tmpl” is a relic from Koha’s HTML::Template past.

Introducing Lightbox2

To an average Web UI designer / programmer, Lightbox usually needs no introduction. However, since the target audience of this blog includes people who do not necessarily program, either for passion or a living, let us introduce the main “protagonist” of our post, i.e. Lightbox! Lightbox2 is the successor to the original lightbox [4] script by Lokesh Dhakar. As described by its author Lokesh:

“Lightbox is small javascript library used to overlay images on top of the current page. It’s a snap to setup and works on all modern browsers.”

Users who are new to the term lightbox (you have probably used it online, without knowing that it is called a ‘lightbox’) can have a look at [4] below in the reference sub-section.

Making Lightbox2 work with Koha

Before we get our hands dirty and share with you what we did, here are a few things that we must keep in mind.

First, the things that one must know

1. As on date, the Koha OPAC uses an older version of the jQuery JavaScript library, i.e. version 1.7.2 (released [5] on March 21, 2012), AND a rather old version of the Twitter Bootstrap framework, i.e. 2.3.1. So whatever lightbox script / library we choose, it *must* work correctly with these two software versions as shipped with Koha.

2. Luckily, Lightbox2 works with jQuery 1.7.x or higher, which puts us in the clear about our choice of lightbox library.

3. When Lokesh’s Lightbox was released about 8 years ago, the world of web development was vastly different from today: different browser versions (mainly IE) implemented “quirks” vs. “strict” modes differently, support for HTML5 / CSS3 often varied significantly from W3C standards, and desktop PCs rather than smartphones and tablets ruled the world. In fact, we were also witnessing the closing moments of the second browser war [6]. By selecting the up-to-date version of the original lightbox script, we ensure that we are using a mature JavaScript library with the largest possible cross-browser, cross-device compatibility, *including* backward compatibility.

Now that we have all these history lessons out of the way, let us proceed with the actual steps to get this show on the road.

Step #1 – Downloading Lightbox2

Following the instructions on the Lightbox2 site, we downloaded the latest version of the library from https://github.com/lokesh/lightbox2/archive/master.zip onto our server and unzipped it.

Step #2 – Installing Lightbox2 inside our OPAC’s DocumentRoot

Since we are using Debian 8.4 on our server with Koha being installed via the .deb package route, our OPAC’s DocumentRoot is located by default at /usr/share/koha/opac/htdocs. As our server is multi-tenanted [7], we are going to create a new folder lightbox2 under this location so that every Koha instance running off this server can take advantage of the Lightbox installation.

sudo mkdir /usr/share/koha/opac/htdocs/lightbox2

Next we shall copy over the folders under the /dist folder from inside our unzipped copy of Lightbox2.

sudo cp -R dist/* /usr/share/koha/opac/htdocs/lightbox2/.

Three folders should be copied over: css, images and js. The important files that we will need for our work are css/lightbox.css and js/lightbox.js.

Step #3 – Getting Koha to work with Lightbox

Now that we have the Lightbox2 library installed on our server, it is time to tell Koha how to use it. The way we are going to do that is by using the very important OPAC system preference OPACUserJS [8] and some jQuery magic that will load the library into our OPAC’s Document Object Model (DOM) [9]. This is how we will do it:

$("head").append("<link rel='stylesheet' type='text/css' href='/lightbox2/css/lightbox.css' />");
$(document).ready(function () {
  $.getScript('/lightbox2/js/lightbox.js').done(function( script, textStatus ) {
    console.log( textStatus );
  });
});

The first line uses $("head").append() to insert the necessary stylesheet (lightbox.css) into the <head> section of the OPAC. The second part uses $.getScript [10] to dynamically load lightbox.js into the page via an AJAX based HTTP GET request once the OPAC page’s DOM has been fully loaded. The console.log() call logs the success or failure of the attempt to load lightbox.js.

Step #4 – Using Lightbox

Adding the data-lightbox attribute to an image link activates the lightbox capability for the image of our choice. The value of the data-lightbox attribute should be unique on the page the image is on, unless we want to create a set of images on that page which should all be served by Lightbox. Basically, using the same value for a set of image links creates a manual carousel for that page. Since we are creating an image gallery, we will use the same value for the entire set of images placed on and linked to from our gallery page. In fact, in our case we wanted to show *two* galleries on the same page – one showing everyday activities, another showing a recent event organised by the library. So we ended up using *two* data-lightbox values – one for each of the sets.

This is where we define our custom page and the custom CSS styles that set things up, ready for Lightbox to take over.

A snippet of our custom page’s markup follows; the data-title attribute adds the caption displayed by Lightbox. As you can see, all four example images share the same data-lightbox value, i.e. “gallery-1”, marking them as part of a single set or gallery of images.

<div class="span3 l2c2galimg">
  <a href="gallery/rksmvv_gallery_02.jpg" data-lightbox="gallery-1" data-title="Reading Room - Journal Section">
    <img src="gallery/rksmvv_gallery_02.jpg" alt="Reading Room - Journal Section" />
  <div class="desc">Reading Room - Journal Section</div>

<div class="span3 l2c2galimg">
  <a href="gallery/rksmvv_gallery_03.jpg" data-lightbox="gallery-1" data-title="Circulation Counter">
    <img src="gallery/rksmvv_gallery_03.jpg" alt="Circulation Counter" />
  <div class="desc">Circulation Counter</div>

<div class="span3 l2c2galimg">
  <a href="gallery/rksmvv_gallery_04.jpg" data-lightbox="gallery-1" data-title="E-Resource Centre">
    <img src="gallery/rksmvv_gallery_04.jpg" alt="E-Resource Centre" />
  <div class="desc">E-Resource Centre</div>

<div class="span3 l2c2galimg">
  <a href="gallery/rksmvv_gallery_05.jpg" data-lightbox="gallery-1" data-title="Photocopy Service">
    <img src="gallery/rksmvv_gallery_05.jpg" alt="Photocopy Service" />
  <div class="desc">Photocopy Service</div>

In the following snippet, note that the data-lightbox attribute value is different, thus marking these entries as a separate set or gallery.

<div class="span3 offset3 l2c2galimg"><a href="gallery/rksmvv_gallery_01.jpg" data-lightbox="gallery-2" data-title="Inauguration of Research and Resource Centre - #1"> <img src="gallery/rksmvv_gallery_01.jpg" alt="Reading Room - Journal Section" />
<div class="desc">Inauguration - #1</div>
<div class="span3 l2c2galimg"><a href="gallery/rksmvv_gallery_10.jpg" data-lightbox="gallery-2" data-title="Inauguration of Research and Resource Centre - #2"> <img src="gallery/rksmvv_gallery_10.jpg" alt="Circulation Counter" />
<div class="desc">Inauguration - #2</div>

The custom CSS classes l2c2galimg and desc seen in the above snippets are simply used to style the images and the captioning <div>; they are not related to Lightbox’s own CSS declarations, so you are free to define your own styling.
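For instance, a minimal styling sketch for these two classes (these exact rules are our illustration here; the post does not reproduce the actual stylesheet used on the site) might be:

/* illustrative only: size the thumbnails and centre the captions */
.l2c2galimg img { width: 100%; border: 1px solid #ddd; padding: 4px; }
.l2c2galimg .desc { text-align: center; font-size: 0.9em; padding: 4px 0; }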

Show me the money!

See the Lightbox enabled galleries in action here: https://rksmvvlibrary.in/pages.pl?p=gallery. Simply click on any of the images and watch Lightbox take over. As these are galleries, you can scroll through the images by touching / tapping / clicking the left or right side of any image in the set, or even use the left/right arrow keys on your keyboard to go back or forward. Clicking or touching the “X” (close) widget, or for that matter anywhere on the screen outside the lightbox popup, will take you back to the gallery page.

Caveats and afterthoughts

1. You do not necessarily need to put your list of images into separate divs of spanX class as we have done here. Lightbox will work equally well with unordered lists (<ul>) classed with Bootstrap 2.3.1’s thumbnail styles.

2. This set of instructions will work with all versions of Koha that use Bootstrap 2.3.1 and jQuery 1.7.2 for the OPAC. Basically, that means all versions of Koha which have shipped with the bootstrap OPAC theme, i.e. Koha 3.14 and above.

3. Lazy loading [11] the lightbox.js library using the getScript AJAX call comes with *one* penalty though. The script request is timestamped and hence not cached, meaning the file gets fetched afresh each time the gallery page is opened by a visitor. You may be able to work around the lack of caching by using the jQuery.ajax call with the dataType set to “script” and caching enabled. We haven’t tried it, so YMMV! A sketch of what we mean appears after this list.

4. Of course, if you deem this to be an important enhancement (we don’t), you can file a new enhancement bug and submit a patch to the Koha community.
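A possible sketch of that caching workaround (untested on our part; the URL assumes the install location from Step #2):

// Untested sketch: load lightbox.js as a cacheable script, avoiding the
// cache-busting timestamp that $.getScript() appends to the request URL.
$.ajax({
  url: '/lightbox2/js/lightbox.js',  // assumes the Step #2 install path
  dataType: 'script',
  cache: true                        // let the browser cache the script
}).done(function( script, textStatus ) {
  console.log( textStatus );
});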


[1] Koha 3.4.3 is now available

[2] Bug 5917 – Switch Koha to use Template::Toolkit

[3] http://template-toolkit.org/

[4] Lightbox (JavaScript) – From Wikipedia, the free encyclopedia

[5] jQuery 1.7.2 Released

[6] Second browser war – From Wikipedia, the free encyclopedia

[7] Multitenancy – From Wikipedia, the free encyclopedia

[8] OPACUserJS – Koha 16.05 (En)

[9] Document Object Model – From Wikipedia, the free encyclopedia

[10] jQuery.getScript()

[11] Lazy loading – From Wikipedia, the free encyclopedia

Koha on an OpenVZ VPS? Better make sure you have the right time!

If your Koha ILS is on an OpenVZ VPS located in another country, then you need to specifically tell MySQL about your timezone… i.e. if you value your sanity!

The heart of everything to do with computers lies in the most intangible of things – time! We all want to know at what time a record was entered, when our patron borrowed / returned a book, when a fine is to be calculated, when our Index Data Zebra re-indexing is supposed to run, when our email / SMS notifications are supposed to go out, etc. Koha keeps track of all these (and much more) using datetime data in the database tables. Most of these fields are set by default to CURRENT_TIMESTAMP. As a result, when we transact using Koha or update our records, the MySQL server automatically fills in the exact datetime values using the system time. This “automatic” business can be an aid or a hindrance – as you will get to see in a moment.

The backstory, a.k.a. how we ended up with this

We were using an OpenVZ based VPS located on the US east coast for a client who is located in India. Since OpenVZ VPSes have their time controlled by the host system, there is not much we can do (unlike with KVM or Xen based virtualization). Luckily the host itself was set up with ntpdate, so at least the time was accurate. We took the next best route: setting the timezone to ‘Asia/Kolkata’, which is the TZ definition for India on Linux.

$ sudo dpkg-reconfigure tzdata

It exited successfully showing the following output:

Current default time zone: 'Asia/Kolkata'
Local time is now:      Wed Sep  7 04:00:01 IST 2016.
Universal Time is now:  Tue Sep  6 22:30:01 UTC 2016.

The problem

Restarting the VPS, we proceeded with our task and things looked OK. Around 4:39 AM, while importing a .mrc file, we saw something that clearly showed we had missed something. The local time was September 07, 2016 4:41 AM and yet Koha told us that we had uploaded the .mrc file yesterday – September 06, 2016 at 7:11 PM! Woah!


We quickly referred to the super awesome Koha database schema website and looked up the “import_batches” table. Sure enough, the upload_timestamp was defined as timestamp with the default set as CURRENT_TIMESTAMP. Thus, while our timezone correction had worked with the rest of the system *and* Koha, MySQL was using something else as its time.

Switching to a terminal we fired up the command-line mysql client and took a closer look.

mysql> SELECT @@system_time_zone;
+--------------------+
| @@system_time_zone |
+--------------------+
| EDT                |
+--------------------+
1 row in set (0.00 sec)

Hmmmm… Eastern Daylight Time is 4:00 hours behind GMT, just as we (IST) are 5:30 hours ahead, a gap of 9:30 hours. That exactly explains why 4:41 AM IST on September 07 showed up as 7:11 PM on September 06. A second query further confirmed this.

mysql> select now();
+---------------------+
| now()               |
+---------------------+
| 2016-09-06 19:29:17 |
+---------------------+
1 row in set (0.00 sec)

The solution

A quick read of http://dev.mysql.com/doc/refman/5.5/en/time-zone-support.html and a cross-check inside /etc/mysql/my.cnf – the MySQL configuration file – confirmed that we needed to set our own default timezone and move our MySQL server away from the system timezone default coming in from the host system’s EDT timezone.

“The value can be given as a named time zone, such as ‘Europe/Helsinki’, ‘US/Eastern’, or ‘MET’. Named time zones can be used only if the time zone information tables in the mysql database have been created and populated.” – MySQL documentation

Going back to our mysql client, we decided to check the time_zone and time_zone_name tables inside the mysql system database. Both turned out to be empty.

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

mysql> select * from time_zone;
Empty set (0.00 sec)

mysql> select * from time_zone_name;
Empty set (0.00 sec)

Since we need the time_zone_name table populated with our zoneinfo data (which resides under /usr/share/zoneinfo/), we turned to the nifty utility mysql_tzinfo_to_sql.

$ mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

It went through, and we checked the time_zone_name table again. This time it returned 1808 rows. With our timezone reference set up, we turned to /etc/mysql/my.cnf and added the following line under the [mysqld] section of the file:

default-time-zone = 'Asia/Kolkata'

Restarting the MySQL server with sudo service mysql restart, we re-checked select now() and SELECT @@system_time_zone. The results were:

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select now();
+---------------------+
| now()               |
+---------------------+
| 2016-09-07 05:44:00 |
+---------------------+
1 row in set (0.00 sec)

mysql> SELECT @@system_time_zone;
+--------------------+
| @@system_time_zone |
+--------------------+
| IST                |
+--------------------+
1 row in set (0.00 sec)
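As an optional extra check (our suggestion, not part of the original session), you can also confirm that the configured default has been picked up by both the server and the current session:

mysql> SELECT @@global.time_zone, @@session.time_zone;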


We reloaded the Manage staged MARC records page in Koha and sure enough it now showed the correct date and time of the upload.

The 5-Minute Series: JavaScript or CSS – what is better at hiding the ‘No cover image available’ markup?

How to efficiently remove the ‘No cover image available’ placeholder using CSS rather than jQuery, when AmazonCoverImages or GoogleJackets can’t retrieve a cover image for a displayed title.

We really like our OPACs to display the cover images of the books and journals which we have often so painstakingly cataloged. Out of the box, Koha allows us to fetch and display book covers from several sources, both local and over the Internet, the commonest of these being (a) AmazonCoverImages and (b) GoogleJackets. While these two settings (either or both) will work for a large number of books, for users in India there is also a lot that these two sources do not offer, especially books in Indian languages or books printed without an ISBN (there are a lot of these in India).

When no image is found, Koha displays the text “No cover image available” as a placeholder. A lot of people would rather not see this. The Koha JQuery Library page on the Koha Community wiki offers a jQuery based approach along the following lines (reconstructed here from its description further below, i.e. select every element carrying the no-image class and .remove() it):

$(document).ready(function(){
    $('.no-image').remove();
});
You simply place this snippet into the OPACUserJS system preference and hey presto! those pesky “No cover image available” displays are history. Well, yes and no! Yes, it works; and “no”, it may not prevent that text from being displayed at least once. Why? Well, how about a slow PC and an even slower Internet connection? Either of the two, or a combination of both, will usually ensure that you get a look at “No cover image available” before the placeholders are “removed” from the displayed page.

The simpler thing, IMHO, is to take the Cascading Style Sheets (CSS) route, which is as simple as placing the following one-liner into your OPACUserCSS system preference:

.no-image { display: none; }


Not only will you *never* get to see the “No cover image available” markup anymore, no matter how slow your PC / Internet connection is; this is also more efficient, since rather than using jQuery to first select and then .remove() a DOM element with a particular style class attribute, we simply declare that such elements should not be displayed at all. The browser does all the optimization in this approach.

A custom subject-wise report of titles with author name, no. of copies, subject name in serialized listing

A custom SQL report for Koha that generates subject wise title lists with author name, no. of copies, subject name and biblionumber, written in response to a reader query over email.

Last week Mr. Gautam Mukhopadhyay, Librarian, Chandrapur College in Burdwan, West Bengal wrote in with a request:

Respected Sir,

I’m writing this seeking a solution for the problem relating to a report generation from Koha. I want to get a list of titles under a particular broader subject field-tag (650). Quite a number of times I’ve checked from SQL Report. But all were in vain as those were not the same what I actually want to get. Following is the specimen of the opted report:
Sr. No.   Title       Author   Copy No.   Subject
1         ……….        X        3          Bengali
Under the subject Bengali or English or whatsoever, I want to get the titles those are belong to that particular subject. However, it won’t be a problem if there are different reports for different subjects. It’s Ok. But the SQL Query should be a general query structure that can be applicable for all such reports on the titles belong to a broader subject like Bengali, History, Geography etc.
Sir, please let me know the query structure, if possible.

Here is a possible solution for his request, which pretty much does what Mr. Mukhopadhyay had specified. In this example we’ll use a sample MARC21 file which can be downloaded from here to try out this example. This dataset has 14 unique bibliographic records with a total of 42 item (holdings) records, belonging to 03 specific broader subjects, i.e. English, Economics and Political Science. As per Mr. Mukhopadhyay’s use-case, the MARC field 650 holds the broader subject classification. However, to match real world scenarios, the 650 fields in some of the cases have other subject headings defined, including narrower terms. Additionally, we are going to add an extra column to our report – the biblionumber – so that if required we can cross-check a title in the report against the biblionumber in the database.

CAVEAT EMPTOR: If you are going to try out this example, we suggest that you define a new Koha library and import this MARC file into it. Mixing this sample data with your existing records is strongly advised against.

Step #1 – Create a new Koha instance and set it up
We are going to use the koha-create Debian command to create a new Koha instance, and we shall call our instance demo.
sudo koha-create --create-db demo

You may call your instance by whatever name you like. If you are not aware of the koha-create command, please read up on “Commands provided by the Debian packages“. Next we will do a default setup and proceed to define a library that we’ll call “L2C2 Technologies Demo Library”, identified by the code “MAIN”, using these instructions here.

NB.: To use the MARC file from this example, you must set the library code for your demo branch to “MAIN”; the name (of the library branch) can be whatever you want it to be.

Step #2 – Define a new Authorize value category
Since our example MARC file has biblios with only (a) English, (b) Economics and (c) Political Science, we will define a new authorized value category, which we’ll call SUBLOOKUP, under Home › Administration › Authorized values. Once set up, our new authorized value category SUBLOOKUP will look like this:
This authorized value list will provide the subject selection list for our custom SQL report. So if you have more subjects, you will need to add them here so that they show up in the list. The “%” in the Authorized value is *critical*; if you want to be really strict about it, you can drop the preceding “%” and retain only the one at the end. However, should you do that, your first 650 field *must* always be the broader subject heading that you wish to filter your report on.
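For instance, with the three subjects in our sample file, the SUBLOOKUP entries would look something like this (the descriptions here are our choice; what matters is the “%” wrapping in the authorized value):

Authorized value        Description
%English%               English
%Economics%             Economics
%Political Science%     Political Science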
Step #3 – Define our custom SQL report
We will go to Home › Reports › Guided reports wizard › Create from SQL and create a new SQL report. In this case, we’ll name the report “List title with number of copies filtered by subject” and add a note that says – “A report written at the request of Mr. Gautam Mukhopadhyay, Chandrapur College, BWN”. The SQL is given below. The report, once saved, will allow us to run it.
SELECT 
 (@row:=@row+1) AS `S/N`, 
 REPLACE (@TargetSubject:=<<Select the subject|SUBLOOKUP>>, '%', '') AS Subject, 
 gmData.Title, 
 gmData.Author, 
 gmData.Copies, 
 gmData.biblioid AS `Biblionumber` 
FROM 
 (SELECT 
  biblio.title AS Title, 
  biblio.biblionumber AS biblioid, 
  ExtractValue(biblioitems.marcxml,'//datafield[@tag="245"]/subfield[@code="c"]') AS Author, 
  count(items.itemnumber) AS Copies, 
  ExtractValue(biblioitems.marcxml,'//datafield[@tag="650"]/subfield[@code="a"]') AS Subject 
  FROM items 
  LEFT JOIN biblioitems ON (items.biblioitemnumber=biblioitems.biblioitemnumber) 
  LEFT JOIN biblio ON (biblioitems.biblionumber=biblio.biblionumber) 
  GROUP BY biblio.biblionumber) AS gmData, 
 (SELECT @row := 0) r 
WHERE Subject LIKE <<Re-select the subject tag|SUBLOOKUP>>
Let us take a moment to understand what this piece of SQL syntax really means.
(@row:=@row+1) AS `S/N`,

and

(SELECT @row := 0) r

The use of the @row variable and the counter (@row:=@row+1) gives us our “serial number” column in the report listing. We can also see the authorized value list “SUBLOOKUP” that we had defined earlier referenced in the SQL.

NOTE: As you may have noticed, we are asking the user to select the subject *twice* (first time: ‘Select the subject’; second time: ‘Re-select the subject tag’). While theoretically we should not be required to do so, thanks to the use of the runtime variable @TargetSubject, in reality we ran into a type casting error (see below). Thus we used this less than pretty way of asking the user to select the subject twice to get our job done.


Step #4 – Running the report

After the report is saved, it is now time to run it, using the “Run report” option. What we’ll see now will be like this:


We need to select the *same* subject from both the drop-down lists and click on the “Run the report” button. Selecting “Economics”, we shall in our case get the following report:


Step #5 – Prettifying the custom report user interface

Having the user select the subject twice is cumbersome as well as prone to human error, so we decided it was time for some jQuery magic to streamline this and leave the users with only a single drop-down to choose from. For this we’ll turn to the IntranetUserJS system preference and add the following jQuery snippet:

 $("#sql_params_Selectthesubject").change(function() {
   var subval = $('#sql_params_Selectthesubject').val();

If this is the first time you are hearing about the IntranetUserJS system preference, you should definitely read up on it. For those of you who are indeed familiar with IntranetUserJS, all we are doing here is (1) hiding the second subject selection dropdown and its label, and then (2) defining that whenever the user chooses a value from the *first* drop-down, the second (and now hidden) drop-down automatically has the same value selected. After saving the IntranetUserJS update, on running the report we shall see this:


And bingo! We are done!
Extraa Innings: To see the actual report in action
  1. Go to the URL https://demo-staff.l2c2academy.co.in/
  2. Use User name / Password : demo / demo
  3. Go to the section Home › Reports › Guided reports wizard › Saved reports
  4. Select “Run” from the “Actions” dropdown at the right.
  5. Play with the subject selection options to see the different outcome.



MarcEdit QuickTip #3 – Getting your 952 (items / holdings data) field in place for import into Koha

Shows how to de-duplicate a .mrc file by merging duplicate bibliographic records spread all over the file, and then gather the holdings data into the repeatable 952 field that Koha expects for its item records.

Last night Pawan Sharma, a fellow user on “Koha Users”, reached out for some help with importing his items into Koha. Like many others, he too had moved his catalog data from a Microsoft Excel spreadsheet to MarcEdit utilizing MarcEdit’s “Delimited Text Translator” feature, which at the end of the process had given him a .mrc file. This he proceeded to upload into Koha using the More > Tools > Catalog > Stage MARC records for import option.

There were no surprises here, *except* that for every single book with multiple copies, Koha imported each of the copies as a separate biblio record, instead of a single entry for the biblio with multiple item records attached to it via the MARC21 952 repeatable field that Koha uses for managing holdings data. Simply put, his data needed to be de-duplicated, with the holdings data merged back before import, typically using the ISBN of the records (MARC field 020).

NOTE: If you wish to read more about Koha’s holding records schema see “Holdings data fields (9xx)” from the Koha Community wiki.

For someone who has not done this before, MarcEdit’s de-duplication and merging can seem like a daunting task. This post will hopefully demystify the process.

The discussions on Koha Users were based on a lot of assumptions, especially with no idea about Pawan’s data. So, I offered to take a look at it. He first sent me a .mrc file that had 12806 records, which I immediately converted into MarcEdit’s MarcBreaker mnemonic, human readable format.


And proceeded to take a “Field count” report (see under the “Tools” menu of MarcEditor) to check exactly how many records out of the total had an ISBN (MARC21 field 020).


The result, as can be seen above: NOT A SINGLE ONE of the 12806 biblio records had an ISBN number! Well, such a file can be de-duplicated and merged, but *not* using MarcEdit. On being told about this, Pawan mentioned that he had other .mrc files that had ISBNs, and so he sent over a second .mrc file (LG-32016-32979.mrc). It turned out that of the total of 965 biblio records in this second file, 828 records had ISBN numbers defined.


The next task was to extract the records that *had* ISBN numbers. The remaining 137 cannot be dealt with in this process and will have to be handled separately. For now, we closed the LG-32016-32979.mrk file with its 965 records and went back to the MarcEdit main window in order to use the “Delete Selected Records” option available under Tools > Select MARC Records.


The next few steps are simple, if not immediately apparent to a new user of MarcEdit. We’ll use the numbered markers on the screenshot to explain them. First, we selected the LG-32016-32979.mrk file with the 965 records in step #1; next, we typed in 020 (since we want to match on ISBN) in the Display Field option (by default it shows 245$a); the third step was to click on the “Import File” button. After the file is imported (it takes just a second or two depending on your file size), the top-left data grid, which was blank so far, will show data similar to this. Finally, in step #4, we click the “Does Not Match” link. Records that do not have an ISBN number are selected, just as the big red arrow here shows.

The last step is to click on “Delete Selected”; this will open a File Save dialog with the title “Remaining Records”. In this case, we provided the name LG-32016-32979_ISBN.mrk, saved it and exited from this deletion utility.

This file LG-32016-32979_ISBN.mrk now has the 828 records with ISBN numbers, each of which has a holdings record. This is what we will work with for the de-duplication process.


Using the Tools > Record Deduplication option of MarcEditor, we will now move the duplicate records into a separate file, which we save as LG-32016-32979_ISBN_DEDUP.mrk, using ISBN as the field to identify duplicates. A popup told us that 828 records were processed, so we are done with de-duplication. We also save our original work file LG-32016-32979_ISBN.mrk, which now contains only biblio records with unique ISBN numbers. A quick check with the Field Count tool showed us there were now 523 records (down from the original 828; the remaining 305 records are the duplicates that are now saved in LG-32016-32979_ISBN_DEDUP.mrk).


Now for the next step, MARC Merge, the last step in this process. We have to go back to the main MarcEdit window and use the menu option Tools > Merge Records. The order of files we specify here is *important*. The “Source File” in this case was LG-32016-32979_ISBN.mrk (the file with the 523 records with unique ISBN numbers), the “Merge File” is LG-32016-32979_ISBN_DEDUP.mrk (the file into which we had moved the duplicates in the previous step) and finally, “Save File” is simply the name of the new merged file we are going to create (hint: this is the final file that we will push to Koha). We named the final file LG-32016-32979_ISBN_MERGED.mrk. The Record Identifier is of course 020 (i.e. the ISBN number), and we move on to the next screen.


This next step is basically *everything* we have been working towards in this post: we select the field to merge in from the “Merge File” into the “Source File” and click next.

In this case everything went well and we were presented with a screen that said “Merge Completed” and gave us the full path and filename of our merged file LG-32016-32979_ISBN_MERGED.mrk.


Of course we opened up the LG-32016-32979_ISBN_MERGED.mrk file in MarcEditor. The first thing was to check the Field Count report, and this is what we saw: 523 biblio records with a total of 828 holdings records, which sounds right! Below is an example of what merged holdings look like.
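Since the screenshot is not reproduced here, a hypothetical MarcBreaker sketch of one such merged record (the ISBN, title, branch codes and barcodes below are all made up) would look like:

=020  \\$a9780000000000
=245  10$aA sample title with three copies.
=952  \\$aMAIN$bMAIN$p32016$yBOOK
=952  \\$aMAIN$bMAIN$p32017$yBOOK
=952  \\$aMAIN$bMAIN$p32018$yBOOK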


Of course there is still the task of exporting the MarcBreaker (.mrk) file back to .mrc so that Koha can ingest it via its MARC21 staging workflow, but everyone knows that 🙂

NOTE: For reference to this tutorial, I’m attaching the zip file containing all the LG-32016-32979 files used in this example.

Easy peasy way of automating remote backup on Google Drive for your Koha database

This post discusses how to automate your Koha ILS’s MySQL database backup onto Google Drive and send an email when it is complete. It shows how you can take advantage of Google Drive’s 15GB of free space (Dropbox only gives you 2GB on the free plan) and do it all from the command line, saving the much needed RAM for your Koha server rather than wasting it on a GUI, which is also a security risk. Further, this attempts to introduce novice readers to the details of the commands they are supposed to follow, with further reading resources, should they be inclined to learn more.

Having your Koha ILS database regularly backed up to remote, cloud storage is an excellent idea. By doing so you ensure a critical off-site disaster recovery measure, which is very good. However, as with all things human, if we leave it to ourselves to do it, there will come a time when we will (a) forget to do it or (b) be unable to do it for some reason. As we all know, good ol’ Captain Murphy’s Law [1] will strike us whenever we are least prepared; in this case, typically that one time we forgot or were unable to take the backup is when the darned thing will crash!

So backup automation is key. Not only does it ensure regularity without fail, it also removes one more essential chore from our immediate plate, thus leaving us free to do other things without feeling guilty over this key housekeeping chore.

Cloud backup – Google vs Dropbox

Dropbox and Google Drive come across as the immediate choices for cloud based backup. However, their free editions differ [2]… by about 13GB of space between them. So for long-term online backup, Google Drive is the de facto choice.

Our objective

So, here is what we set out to do:

  1. create a datetime stamped backup of the database; (so we can tell just by seeing the filename when the backup was taken)
  2. compress it with the bzip2 utility; (so all those loooooong lines of SQL text do not take up so much space; a text file can compress down to within 10% of its original size)
  3. upload it to a specified folder on Google Drive; (so that all our backups remain in one place, date-wise)
  4. email the user that the remote backup process is complete. (so when we are outside or on vacation and don’t have access to our workstation, we still get a notification when it completes; and if we don’t get one, then something certainly went wrong and someone should do something about it)

And of course, since we are talking about making this happen everyday at the same time, we need to create a cron job that will deliver all of 1, 2, 3 and 4 to us in a single neat little command.

As you all know, no self respecting system administrator will ever be caught running the X11 windowing system on a production server. So we are going to do these the way real system admins do: from the command line.

NOTE: X11 is geekspeak for the Graphical User Interface (GUI) environment we see e.g. when we log into an Ubuntu Desktop (which is typically the Unity desktop)

Command line in this day and age? Are you nuts???

No! And here is the reason. X11 is not only an inherently insecure protocol that puts your production system at risk, it is also (compared with a command line only system) a tremendous resource hog! We all know that more free memory (RAM) is usually-a-good-thing™, so instead of wasting our precious RAM on running a GUI (and all the unnecessary software along with it, making it slow *and* insecure), we are going to show you how to do this all from the command line. One other thing: if you ever need the assistance of an expert, you will find that command line setups are also easier to debug (for an expert); after all, aren’t they always asking you to check your “logs”? All that is, after all, command line output. So like the Chloromint ad, please don’t ask us again why we love the command line! 😉


We want a normal user account with no admin privileges; in our case we will call it l2c2backup, and we will create it from the terminal using the adduser l2c2backup command. See below:


Next up, we need to switch over to the new user account and create a synchronization folder for Google drive.
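The commands for this step (assuming the same account and folder names used throughout this post) would be along these lines:

$ su - l2c2backup
$ mkdir /home/l2c2backup/gdrive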


At this point, we’ll press “Ctrl+D” to exit from the l2c2backup user and come back to the root or sudo user, for we now need to install a command line Google Drive client on our system. We are going to use the (almost) official Google Drive command line client for Linux, known simply as “drive” and available from https://github.com/odeke-em/drive

Since we are using Debian, we have the advantage of using the pre-built binaries, which we shall install by executing each of the following commands in turn:
# apt-get install software-properties-common
# apt-add-repository 'deb http://shaggytwodope.github.io/repo ./'
# apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7086E9CC7EC3233B
# apt-key update
# apt-get update
# apt-get install drive

NOTE: If you are using Ubuntu or another mainstream Linux distribution, you can use the instructions given on the Platform Packages page.

Once we have completed the installation of “drive”, we need to go back to our /home/l2c2backup/gdrive folder as the user l2c2backup and initialize the sync folder (i.e. /home/l2c2backup/gdrive) using the command “drive init”.


Copy the really long URL that the command tells you to visit and open it in your web browser. You will see an application authorization dialog screen come up; click on the “Allow” button.


NOTE: Before pasting the URL into your browser, you must make sure that you are logged into the actual Google user account where you want to send the backups. Don’t make a mess here.

Assuming you did everything as I have mentioned so far, you will be automatically redirected to the page with the authorization key. It will look pretty much like the one below. Of course, every request will generate a separate access authorization key, so use the one generated specifically against your request.


Copy this key and paste it back at the prompt in your terminal window and press <ENTER>. DO NOT TRY TO TYPE IT OUT BY HAND, COPY-N-PASTE IS THE ONLY WAY HERE!

If you have done everything alright then you should be back at the command prompt without any error or any other message. Your sync folder should now be ready.

Putting our solution together

Now that we have the Google Drive sync ready, it is time to look at each piece of our basic requirement.

1. Creating a datetime stamped backup of our database

First we need to create the name of our output file for the MySQL backup. For this we shall use: BACKFILE=<dbname>.$(date +"%Y%m%d_%H%M%S").sql;. The date format will give us a datetime string formatted like “20160723_000001” when the date & time is 12:00:01 (AM) on 23-July-2016. For this example, let us assume that the BACKFILE environment variable will hold the value koha_ghci.20160723_000001.sql.

Note: replace <dbname> with the actual name of your Koha database, which in our case is koha_ghci. So the syntax for us looked like: BACKFILE=koha_ghci.$(date +"%Y%m%d_%H%M%S").sql;. If you want to learn more about the format specifiers of the date command, you can read this.
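For instance, running just the date part on the shell shows the stamp it produces (using the example values from above):

$ date +"%Y%m%d_%H%M%S"
20160723_000001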

Next we will create the actual db backup using the datetime stamped output filename we just created. For that mysqldump -u<mysql_db_username> -p<mysql_db_passwd> <dbname> > /home/l2c2backup/gdrive/$BACKFILE.

Note: replace the <mysql_db_username>, <mysql_db_passwd> and <dbname> placeholders with your actual values. In our example case, the actual backup command string looked like this: mysqldump -ukoha_ghci -pASx2xvercbHXzs2dP koha_ghci > /home/l2c2backup/gdrive/$BACKFILE.

2. Compressing our SQL export

The previous step exported our koha_ghci database as koha_ghci.20160723_000001.sql. We shall now compress this with bzip2 /home/l2c2backup/gdrive/$BACKFILE, which will give us the compressed file koha_ghci.20160723_000001.sql.bz2

3. Upload the compressed SQL backup to Google drive

Before we proceed with the actual upload, we should create a dedicated directory *on* our actual Google Drive to store our backups. Let’s call this directory DBBACKUPS and create it on our online Google Drive space. It should be mentioned here that the upload command of the client we are using takes the form drive push --destination <remote_folder_name> <full_path_to_compressed_file>. This command asks for confirmation, and we need to pass “Y” for yes before it will proceed. We take care of that by adding echo Y | before the drive push command.

So in our case it will be echo Y | drive push --destination DBBACKUPS /home/l2c2backup/gdrive/$BACKFILE.bz2

Note: If you wish to learn about the various other options you can use with drive push, I suggest you read this for the details.

4. Sending an email when the upload is done.

We are not running a dedicated, full fledged mail server like, say, Postfix on this box. Rather, we are using the lightweight msmtp-mta with our Gmail account as the mail relay. If you want to know how to configure it, I suggest that you read this tutorial, ignoring the “mutt” part, which you do not require. It is very simple. We had email sending working in under a minute; that’s just how long it took us to configure it.

Note: Just remember you *must* have openssl installed, otherwise you will never be able to talk to Gmail. You will also need to go to your Google account and enable support for what Google likes to call “less secure apps” (which means any app that does not use Google’s OAuth2 protocol for authentication). You will be authenticating over TLS and it is a perfectly safe thing to do, so just ignore Google’s ominous tone and enable “less secure apps”.

Now that we have msmtp-mta up and running, we will send out that email using this: printf "To: <recipient_email_address>\nFrom: <your_gmail_address>\nSubject: <dbname> db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of <your_gmail_address>.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp <recipient_email_address>

In our case that happened to be printf "To: monitoring@l2c2.co.in\nFrom: indradg@gmail.com\nSubject: KOHA_GHCI db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of indradg@gmail.com.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp indradg@l2c2.co.in.

5. Putting it all together

Now that we have all the parts of the puzzle in place, it is time to assemble it into a single piece. And the way it worked for us was: BACKFILE=koha_ghci.$(date +"%Y%m%d_%H%M%S").sql; mysqldump -ukoha_ghci -pASx2xvercbHXzs2dP koha_ghci > /home/l2c2backup/gdrive/$BACKFILE && bzip2 /home/l2c2backup/gdrive/$BACKFILE && echo Y | drive push --destination DBBACKUPS /home/l2c2backup/gdrive/$BACKFILE.bz2 && printf "To: indradg@l2c2.co.in\nFrom: indradg@gmail.com\nSubject: KOHA_GHCI db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of indradg@gmail.com.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp indradg@l2c2.co.in

Note: The reason we used “&&” is that in BASH it stands for what is called a “logical AND”. In simple English, this merely means that the next command will execute only if the command before it completed successfully.

A BASH script and a cron job

We placed the one-liner we cobbled together into the following BASH script, which we named “backuptogoogle.sh” and placed in the folder /usr/local/bin after setting its execution bit with chmod a+x /usr/local/bin/backuptogoogle.sh

#!/bin/bash
BACKFILE=koha_ghci.$(date +"%Y%m%d_%H%M%S").sql; mysqldump -ukoha_ghci -pASx2xvercbHXzs2dP koha_ghci > /home/l2c2backup/gdrive/$BACKFILE && bzip2 /home/l2c2backup/gdrive/$BACKFILE && echo Y | drive push --destination DBBACKUPS /home/l2c2backup/gdrive/$BACKFILE.bz2 && printf "To: indradg@l2c2.co.in\nFrom: indradg@gmail.com\nSubject: KOHA_GHCI db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of indradg@gmail.com.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp indradg@l2c2.co.in

We set up a root user cron job with crontab -e, adding the following line and saving it.

@daily /usr/local/bin/backuptogoogle.sh

Note: The @daily shortcut will execute our script exactly at midnight every day. If you want to know the other useful cron shortcuts, I suggest you read this useful post by my Koha colleague and good friend D. Ruth Bavousett over here.
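For reference, the @daily shortcut is equivalent to the explicit five-field crontab form:

0 0 * * * /usr/local/bin/backuptogoogle.sh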

Backup automation from command line

If you have been able to follow the instructions, suitably modifying them to your specific settings, you have just achieved backup automation from the command line. Like I said… It’s Easy Peasy!!! 😀


[1] https://en.wikipedia.org/wiki/Murphy%27s_law

[2] http://www.cloudwards.net/dropbox-vs-google-drive/#features

Adding the item type filter to OPAC masthead search

Earlier today my good friend Vimal Kumar Vazaphally posted a question here about how to add the item type (mc-itype in Koha search speak) filter as a dropdown to the default masthead search in the Koha OPAC.

Fig. 1: The default OPAC masthead search bar.

RTFM and RTFM often!!! It may save your life!

<rant>The discussion that ensued on the FB group made something very clear. People forget to read the fine manual, and when they do, they do not read “between the lines”. Trust me on this one, the Koha user manual is truly a ginormous treasure trove, if you take the pains to read it.</rant>

Ok! Here is why I said that people really need to RTFM. The solution to the problem which Vimal shared can easily be extrapolated from this section in the manual – Appendix P: Extending Koha – Newest title pulldown. (#KohaTrivia: it is based on a July 2012 blog post by Nicole C. Engard, Koha’s indefatigable Documentation manager.)

“Reading between the lines” of a given solution

Here was the catch: Nicole’s solution pulled out only the newest arrivals of each itemtype, whereas we need it to pull out everything tagged with an itemtype, whether new or old. The second catch was that in Nicole’s example, she added plain HTML to the OpacNav system preference. However, we are going to add it via the jQuery / JavaScript route, using the OPACUserJS system preference.

We will follow Nicole’s example and pull out all the item types that we need to populate the drop-down using SQL. However, we will need to escape the additional “backslash” (i.e. \ ), since we need MySQL to execute the query and actually generate the HTML markup that will be rendered on the browser via JavaScript. JavaScript does not care for arbitrary line breaks 😉 [1], and MySQL does not care about unescaped backslashes!

Now, if this sounded confusing, ponder for a moment on this Ajeet joke from yesteryear (translated from the Hindi):

Raabert: Boss? What do we do with this one, boss?
Ajeet: Rawbert! Drop this pup into liquid oxygen. The liquid won’t let him live, and the oxygen won’t let him die.

Nicole’s SQL based option list generator:
SELECT CONCAT('<option value=\"mc-itype:', itemtype, '\">',description,'</option>') FROM itemtypes

Our modification that makes the output JavaScript friendly:
SELECT CONCAT('<option value=\"mc-itype:', itemtype, '\">',description,'</option> \\') FROM itemtypes

In this case, it turns out we have 5 itemtypes defined, and we get the following output, which we save as CSV for inclusion in our jQuery.

<option value="mc-itype:BBK">Bengali Books</option>\
<option value="mc-itype:BOOK">Books</option>\
<option value="mc-itype:BVOL">Bound Volumes</option>\
<option value="mc-itype:REF">Reference Books</option>\
<option value="mc-itype:SER">Serials</option>\

(Hint: the exact item types are most likely to differ in your case; DO NOT copy-paste this output AS-IS.)

Let’s build the actual jQuery snippet

$( '<select name="limit" id="limitfiler" style="margin-left: 4px;"> \
<option value="">-- filter by item type --</option> \
<option value="mc-itype:BBK">Bengali Books</option> \
<option value="mc-itype:BOOK">Book</option> \
<option value="mc-itype:BVOL">Bound Volumes</option> \
<option value="mc-itype:REF">Reference Books</option> \
<option value="mc-itype:SER">Serials</option> \
</select>' ).insertAfter('#masthead_search');

And plug it into your OPACUserJS system preference.

Let’s test our new drop-down!

Fig 2: Searching for “Subject” as “English language” with no item-type based filtering
Fig 3: Additionally “BOOK” item type filter on.
Fig 4: With REF itemtype filtering on.


[1] “Multi-Line JavaScript¬†Strings” by David Walsh