Running a public Z39.50 service with 100% reliability

For 2 months now, we have been running the 5th public Z39.50 server from India with 100% host reliability.

Earlier this year, on April 20th, we shared with our readers that FMIRO CCU's public access Z39.50 service, run by L2C2 Technologies, had achieved 100% host reliability (according to irspy.indexdata.com) and in the process had become the 5th Indian entry in IRSpy's global directory of open access Z39.50 servers.

We are happy to report that, 2 months on, we have managed to run the server without a single service drop and have continued to maintain our 100% host reliability status. For us this has been a learning exercise, and we hope it will encourage more Koha users across India to start opening up their catalogs for copy cataloging by their fellow catalogers.

Host connection reliability

Host connection reliability measures the reliability of the target only in its ability to respond to connections: the display indicates the number of successful connections in the last two months, the total number of attempted connections in that time, and the percentage of successful connections. For example, reliability of 9/15 = 60% indicates that fifteen attempts have been made to connect to the server in the last two months, of which nine (60%) have been successful. [1]
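The arithmetic behind the percentage is simple enough to sketch in plain JavaScript (the function name is ours, not IRSpy's):

```javascript
// Host reliability: successful connections as a percentage of attempted connections
function reliability(successes, attempts) {
  return Math.round((successes / attempts) * 100);
}

console.log(reliability(9, 15) + '%');  // the 9/15 example above works out to 60%
```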

About IRSpy

IRSpy maintains a global registry of information retrieval targets, about 1295 as per recent count (http://irspy.indexdata.com/stats.html), supporting protocols like ANSI/NISO Z39.50 (ISO 23950) and SRU/SRW web services.

About Index Data

Short answer: The guys who publish the Zebra indexing engine and YAZ toolkit and software libraries.

Long answer: Since 1994, Index Data has offered software development, consulting and integration with a focus on search. Our pioneering involvement in open source and open standards dates back to the first release of the YAZ toolkit for Z39.50 in 1995. [2]

References

[1] IRSpy help: info/reliability

[2] About Index Data

Managing patrons with same permanent & local address

Setting your borrower's local address to be the same as their permanent address with just a single click during patron entry in Koha ILS.

Often, when folks are unable to find some nifty feature that was present in their erstwhile LMS but is missing in Koha, they are found exclaiming – “But we can’t do that with Koha!”. Well, we have news for you – Koha is open source, and that means you can build or modify the parts that you need or that are missing. But you do not know how to do that? Well… *that* is not really Koha’s problem. Fear not: if you are willing and have the aptitude for poking around code, you can do it too. There are plenty of open access resources that show you how, just waiting for you to pick them up and start working on your skills. After all, it is said:

If you give a hungry man a fish, you feed him for a day, but if you teach him how to fish, you feed him for a lifetime.

Why this post

At L2C2 Technologies, we work with a lot of academic libraries that need to record both the permanent as well as the local address of their students. Koha has allowed recording more than a single address for a patron for donkey’s years. If you look at the schema of the borrowers table in the Koha database, you will see that there are fields to record both the primary address and an alternate address. These two sets of fields fit nicely into our permanent and local address requirement.

[Screenshot: 2017-06-19_01]

However, library staff often complain that it is useless extra work to re-enter the same data in both sets of fields, as many users have one and the same address for both. As a result, we are sometimes asked how to cut down this extra work. In this post, we share one of the ways in which you too can do the same, should you need to.

Choosing our tools

All we use are snippets of JavaScript, jQuery and CSS to achieve our objective. All of these go into the Koha database as part of the IntranetUserJS system preference. We do not touch any template file or change any underlying Perl code. This way our tweak is guaranteed to survive Koha version upgrades without any further effort on our part.

The steps… as easy as 1-2-3

Since we do not want to re-type the same information, the only option is to copy it from the first set of fields, and that is what we do by adding a checkbox HTML form input element. We give this checkbox the id copypermaddress and insert it into the DOM just before the first li element belonging to the parent fieldset memberentry_address on the Add Patron screen.

$('<li><input type="checkbox" name="copypermaddress" id="copypermaddress" value=""><label for="copypermaddress">Same as permanent address:</label><div class="hint">Click to copy permanent address data</div></li>').insertBefore('#memberentry_address > ol > li:first-child');

While the above insertion gets us the following screen, it still does not do anything, i.e. if you clicked the checkbox, nothing would happen yet. We cover that in the next step.

[Screenshot: 2017-06-19_02]

So we add a listener that waits for a state change of the checkbox. In plain English, that means it detects when a user clicks the checkbox and then, based on whether it was checked or unchecked, takes the appropriate action. And that is exactly what happens below. The first part goes into action if the checkbox was checked, and the part after the else kicks in when it is unchecked. In the first case we copy over the values from the permanent address fields, and in the second we undo the copy and blank out the local address fields.

$(document).ready(function(){
  $('#copypermaddress').change(function() {
    if (this.checked) {
      // Checked: copy each permanent address field into its alternate (B_*) counterpart
      $('#B_address').val($('#address').val());
      $('#B_address2').val($('#address2').val());
      $('#B_city').val($('#city').val());
      $('#B_state').val($('#state').val());
      $('#B_zipcode').val($('#zipcode').val());
      $('#B_country').val($('#country').val());
    } else {
      // Unchecked: blank the fields, restoring our site defaults for state and country
      $('#B_address').val('');
      $('#B_address2').val('');
      $('#B_city').val('');
      $('#B_state').val('West Bengal');
      $('#B_zipcode').val('');
      $('#B_country').val('India');
    }
  });
});
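Stripped of the jQuery/DOM plumbing, what the listener does is a simple copy-or-reset over the six address fields. Here is a minimal sketch as a plain function (the field names mirror the form ids used above; 'West Bengal' and 'India' are this particular site's local defaults):

```javascript
// Site-specific defaults restored when the checkbox is unchecked
var LOCAL_DEFAULTS = {
  address: '', address2: '', city: '',
  state: 'West Bengal', zipcode: '', country: 'India'
};

// Returns the values the alternate (B_*) address fields should take
function localAddressFields(checked, permanent) {
  if (checked) {
    // Checkbox ticked: mirror the permanent address field-for-field
    return Object.assign({}, permanent);
  }
  // Checkbox cleared: blank the fields, keeping the site defaults
  return Object.assign({}, LOCAL_DEFAULTS);
}
```

The jQuery listener is just this decision applied to the DOM, one .val() call per field.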

The two following screenshots show exactly how this works. In the first, only the permanent address has been entered, while the second shows how the data is copied over when the checkbox is clicked.

[Screenshot: 2017-06-19_03]

[Screenshot: 2017-06-19_04]

References

  1. .insertBefore() – http://api.jquery.com/insertbefore/
  2. :first-child Selector – https://api.jquery.com/first-child-selector/
  3. .change() – https://api.jquery.com/change/
  4. .val() – http://api.jquery.com/val/
  5. Koha DB Schema – http://schema.koha-community.org/master/

jQuery quick tip : Using Patron Attribute fields without double-rowed textarea boxes

A jQuery quick tip for Koha ILS

Often we have clients who want to capture additional data for their patrons. For schools and colleges, this typically includes demographic details, roll numbers, program enrolled etc. The Koha-friendly way to do this is by using Extended Patron Attributes, aka custom fields for patron data.

[Screenshot: 2017-06-17_03-30-18]

The thing about these patron attribute fields is that if they expect textual input, Koha uses the textarea HTML element for them. Which is fine, except that textarea elements are sized to 2 rows by default. This is something that confuses some users, who expect to see an input element instead. So we decided to adopt a middle-way solution – to reduce the textarea element’s rows attribute from 2 to 1.

jQuery to the rescue

As always we turn to trusty jQuery, which makes this a ridiculously easy thing to do. Here is the code snippet:

$(document).ready(function(){
  if ($('#pat_memberentrygen').length) {
    var tareas = $('textarea[id^=patron_attr_]');
    for (var i = 0; i < tareas.length; i++) {
      // shrink each patron attribute textarea to a single row
      $(tareas[i]).attr('rows', 1);
    }
  }
});

We plug that code into our IntranetUserJS system preference and we are good to go! 🙂 The screenshot below shows the change it brings to the patron data entry UI.

[Screenshot: 2017-06-17_03-31-01]

Code explained

In the first line (i.e. the one starting with if) we check whether we are actually on the patron member entry page. Next we create a jQuery collection of only the textarea elements on *that* page *which* have an id beginning with patron_attr_. And finally we loop through that collection and change the rows attribute of each textarea field it holds.
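The same select-then-loop pattern can be illustrated without a browser, using plain objects in place of the DOM elements (the ids below are purely illustrative):

```javascript
// Stand-ins for form elements on the page; only the first two are patron attributes
var elements = [
  { id: 'patron_attr_1', rows: 2 },
  { id: 'patron_attr_2', rows: 2 },
  { id: 'some_other_field', rows: 2 }  // must be left untouched
];

// Select only the patron attribute textareas, then set each one to a single row
function shrinkPatronAttrRows(els) {
  var matched = els.filter(function (el) {
    return el.id.indexOf('patron_attr_') === 0;  // same test as [id^=patron_attr_]
  });
  matched.forEach(function (el) { el.rows = 1; });
  return matched;
}

shrinkPatronAttrRows(elements);
```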

An installer bug in Koha 17.05 and how it got fixed

Bugs in shipped versions of software are as old as the history of computing. What matters is how fast and efficiently they are fixed – and that too by a loose-knit team of globally dispersed volunteers.

Earlier this afternoon, my senior colleague and fellow Koha aficionado Dr. Parthasarathi Mukhopadhyay aka PSM posted on FB about his experience with the latest stable release of Koha, i.e. 17.05. Only yesterday Mirko Tietgen had released the Debian packages of 17.05, and PSM was one of the early few in the country to get his hands dirty trying out the new version. In this case, his hands *did* become slightly dirty, due to a bug in the newly revamped web installer of Koha. And thus PSM posted “With exciting new features and a serious bug“.

The web installer bug and what it does

There is a rather pesky bug in the web installer of Koha as shipped with 17.05. This bug is not present in Koha version 16.11.x or earlier. It manifests only under two conditions: (1) when you are working with a fresh Koha 17.05 installation, OR (2) when you are attempting to create a new Koha instance (using sudo koha-create --create-db <instancename>) on a Koha installation upgraded from 16.11.x or earlier.

NOTE: The bug does not affect users who are simply upgrading an existing Koha instance to 17.05 from an earlier version, e.g. the 16.05.x series.

What the bug does is that the optional data sets and some mandatory default data sets (SQL files located in and under the /usr/share/koha/intranet/cgi-bin/installer/data/mysql directory) do not get loaded into the Koha database being set up.

Image courtesy : Dr. P. Mukhopadhyay

NOTE: Of course one can load them manually from the command line, but that is not usually a pretty thing to do, and the user needs to know what they are doing.

Long live the bugzilla

The bugzilla is the place where all Koha bugs are tracked and fixed. It is located at https://bugs.koha-community.org/. On June 7, 2017 Julian Maurice, a Koha dev from BibLibre, reported the issue – “Web installer does not load default data” (bug id 18741). Within 20 minutes, Julian had also submitted the patch (i.e. the code to fix or remove the bug) to the bugzilla. Within 48 hours, Nick Clemens, followed by the current Release Manager Jonathan Druart, had signed off the patch (i.e. they tested and certified that Julian’s fix works as expected). Four minutes later Jonathan had also “pushed” the patch / fix into the current under-development version. And 5 days later Fridolin Somers – the release maintainer for the 17.05 series – “pushed” the patch into the current stable branch, with the note that the next version, 17.05.1 (to be released in another 10 – 12 days), will carry the fix as a publicly available built-in fix.

[Screenshot: Bugzilla bug 18741]

For the impatient

Now if, for some reason, you are one of the impatient types who can’t wait until 17.05.1 is released at the end of this month, you may be able to manually patch your system. I do not recommend it, and if you do what follows next, then one of the following applies to you:

  1. You are impatient
  2. You like to live dangerously
  3. You know what you are doing
  4. All of these

Open the file /usr/share/koha/intranet/cgi-bin/installer/install.pl in a text editor and go to line number 248, which should read “scalar $query->param('framework') );“. Replace this line with “$query->multi_param('framework') );“. Save and close the file.

Run the web installer to set up your new Koha 17.05 instance. This time, things should work OK. The screen that you will see after the import of all the mandatory and all the optional data sets (we chose it that way) will be as given below.

[Screenshot: 2017-06-14_23-36-09]

Using Google Drive to upload files into directory under Koha OPAC DocumentRoot

Almost a year back, we shared a post about how to use Google Drive as a remote backup storage. If you are unfamiliar with it and wish to understand the concepts presented here, we suggest that you first read it and then proceed with this one.

Why did we do this?

Our client partner Bangabasi College is putting up a collection of their college question papers from previous years as PDF files. You can have a glimpse of it by clicking here. The page presented to the visitor to the OPAC is generated using a facility called “Koha as a CMS“. Now here is the thing: while the HTML required to display the scanned question paper PDF files is handled well by the “Koha as a CMS” functionality, it does not handle the part where we need to actually upload the PDF files into the Apache2 DocumentRoot path of Bangabasi College’s Koha instance.

So here is what we did. A normal SCP user account was created on the server hosting the Bangabasi College account, into which the PDF files were uploaded by the library staff users. However, after this, it required manual intervention from us to move these files into the correct DocumentRoot path. We had created a folder QB for the question bank under the DocumentRoot, as /usr/share/koha/opac/htdocs/bangabasi/qb, and we moved the uploaded PDF files into this QB folder ourselves.

But this created one problem, a big one. Our client was dependent on us at all times to move / sync the uploaded files into their final destination, the QB folder. Also, if they needed to correct and re-upload a PDF file, they would again need us to help them move the corrected file into the DocumentRoot location. So, basically, if we were not available for any reason, we would be holding them up from updating / uploading their own files into their hosted Koha. While our client was happy with how things were working, to us this was clearly not at all a desirable situation.

Our client was already using Google Drive, and that’s when we figured that instead of simply using Google Drive for backup, we could also use it to allow our client to do direct, independent uploads – their data in their own hands at all times. And thus this experiment.

Setting it all up

1) We created a folder named “qb” on Bangabasi College’s Google drive.

2) Next, on the server, within the folder /usr/share/koha/opac/htdocs/bangabasi we ran the command drive init. This asked us to authorize the Google Drive command line client and fetch the API key, which is what we did after logging in to Bangabasi College library’s Gmail account. The API key was copied from the browser and pasted back into the command line. Basically, what this did was to create a hidden directory named .gd under /usr/share/koha/opac/htdocs/bangabasi and create a file there called credentials.json. This completed the authentication setup with Bangabasi’s Google Drive account.

3) Lastly, we set up the following cron job as the root user:

*/5 * * * *  cd /usr/share/koha/opac/htdocs/bangabasi && /usr/bin/drive pull -quiet qb

to execute on our server once every 5 minutes.

How it works

Now, whenever a library staff user uploads a PDF file into their Google Drive’s qb folder, the cron job on our server checks every 5 minutes whether there is a new file on the remote Google Drive. If there is, the new file(s) are pulled down automatically into /usr/share/koha/opac/htdocs/bangabasi/qb. In this case, for instance, 125 .pdf files totaling about 19 MB were pulled down in ~18 seconds.

Likewise, if a file is modified or removed from Bangabasi’s Google Drive “qb” folder, it is correspondingly synced or removed from the /usr/share/koha/opac/htdocs/bangabasi/qb folder on our server.

How to reference the files

Since the PDF files are stored under the /bangabasi/qb folder under the Koha OPAC’s DocumentRoot, i.e. /usr/share/koha/opac/htdocs/, we simply need to refer to the files with the href attribute value set to /bangabasi/qb/<filename> in our HTML code.
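For example, a link in the page HTML generated via “Koha as a CMS” would look something like this (the filename and link text are illustrative):

```html
<!-- served from /usr/share/koha/opac/htdocs/bangabasi/qb/ on disk -->
<a href="/bangabasi/qb/physics-2016.pdf">Physics question paper, 2016 (PDF)</a>
```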

Pros and cons of this approach

First the cons:

1) Google’s AI algorithms get to read all your PDF files. But since our client is already using Google’s services, this is apparently not a major concern to them. And anyway, our client is allowing public dissemination of these files, so Google is going to read them one way or another.

2) If a library staff user accidentally or maliciously deletes the Google Drive folder or the files in it, then the very next run of the pull command will remove the same off our server. But the same would have been the case if the staff users had root / sudo access to the Koha DocumentRoot (i.e. /usr/share/koha/opac/htdocs). In fact, in the latter case, they could even rm -rf the entire server, removing *everything* from it.

The Pros

1) You can now allow your staff users to freely upload the processed files without having to give everyone access to the actual Koha server’s filesystem. The chance of accidental or malicious deletion of files off the Koha server is largely minimized.

2) The speed! Simply put, uploading files to Google Drive is usually faster than uploading directly to a Koha server hosted on the Internet. The transfers between Google’s servers and the hosted Koha server also happen at a high rate of transfer.

3) You basically have *two* online copies of your PDF files – (a) in the Google Drive folder and (b) on the Koha server – which is good in terms of redundancy.