Adding autocomplete support to MARC21 260 / 264 (imprint) fields in Koha

Adding an auto-complete feature to Koha's MARC21 260 / 264 fields (imprint – place, name of publisher, distributor, etc.) as a cataloging aid.

As per LC's AACR2 (as well as RDA) instructions, the imprint information captured in MARC21 fields 260 (AACR2) and 264 (RDA) should be *transcribed* rather than *recorded*, following the principle – "Take What You See and Accept What You Get" [1]. As a result, the 260$a (place) and 260$b (publisher, distributor etc.) are usually not handled as fields whose values are controlled using authorized values.

Up until 2001, the 260 field was a non-repeatable (NR) field. It was made repeatable to accommodate frequent publisher name changes.

Since the 260$a and 260$b are not usually guided by authorized values, it is noted (at least in the Indian subcontinent context) that catalogers often make typographical errors while transcribing the data, e.g. "Pearson Education" may be inadvertently entered as "Peerson Education" or even as "Pearshon education", while "Kolkata" may have been entered as "Kolkata" as well as "Kolkatta" or even "Kolkhata". Without authority control of the field, this cataloging quality check is often overlooked. Errors like these often end up affecting the results of advanced searches or custom SQL reports.

Luckily for us, Koha ILS uses jQuery extensively and (via jQuery UI) provides nice autocomplete widgets, like the ones we see in action when we type in part of a borrower's name in patron search (checkout) or in the authority headings search, where entering 3 characters triggers an AJAX-based lookup with the option to select one of the offered suggestions *OR* to type in our own.

AJAX stands for "Asynchronous JavaScript and XML". In simple terms, it encompasses a set of web development techniques that allow us to fetch and load data from a remote server into the currently open page, without requiring us to refresh / reload the page. [2]

Recently a client requested that we offer them a way to look up publisher names and place names (for field 260) already entered into their Koha instance, without having to type them in full every time. For example, their database already had the following publisher names entered – "Pearson Education", "Prentice Hall India", "PacktPub", "Press Trust of India" etc. Now, if they encountered an item from one of these publishers, they should be able to pull up a list just by entering "P" into the 260$b field and then select the one applicable. And if they encountered a publisher name not yet in the database, say "Penguin Books", they should be able to type it in as well.

Step #1 : The lookup script

Koha 16.11 ships with 3 (three) different ysearch.pl scripts that show us how to achieve this. You can find out which ones these are by using the command `locate ysearch.pl`. NOTE: You may be required to run `sudo updatedb` once before locate finds the files. For our requirement we modeled our script, which we'll call 260search.pl, on /usr/share/koha/intranet/cgi-bin/cataloguing/ysearch.pl. You can grab a copy of 260search.pl from L2C2's github repo here [3]. The script returns a JSON based result set if results matching your input are found.
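For reference, the autocomplete snippet shown in Step #2 below expects the script to return a plain array of objects, each carrying a fieldvalue key. An illustrative (not verbatim) example of the response for the term "P" against field=publishercode, using the publisher names from the client scenario above, would look like this:

[
  { "fieldvalue": "PacktPub" },
  { "fieldvalue": "Pearson Education" },
  { "fieldvalue": "Prentice Hall India" },
  { "fieldvalue": "Press Trust of India" }
]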

Remember that **every** script that Koha executes needs its executable bit set, and so does this one. Therefore, do *not* forget to set the executable bit for the script with `sudo chmod a+x /usr/share/koha/intranet/cgi-bin/cataloguing/260search.pl` before you proceed to the next step.

Step #2 : Enabling the fields

With the script in place, we now need to turn to the IntranetUserJS system preference and enter the following jQuery snippet to enable autocomplete in 260$a and 260$b:

$(document).ready(function(){
  $( '[id^="tag_260_subfield_a"]' ).autocomplete({
    source: function(request, response) {
      $.ajax({
        url: "/cgi-bin/koha/cataloguing/260search.pl",
        dataType: "json",
        data: {
          term: request.term,
          table: "biblioitems",
          field: "place"
        },
        success: function(data) {
          response( $.map( data, function( item ) {
            return {
              label: item.fieldvalue,
              value: item.fieldvalue
            }
          }));
        }
      });
    },
    minLength: 1,
  });
  $( '[id^="tag_260_subfield_b"]' ).autocomplete({
    source: function(request, response) {
      $.ajax({
        url: "/cgi-bin/koha/cataloguing/260search.pl",
        dataType: "json",
        data: {
          term: request.term,
          table: "biblioitems",
          field: "publishercode"
        },
        success: function(data) {
          response( $.map( data, function( item ) {
            return {
              label: item.fieldvalue,
              value: item.fieldvalue
            }
          }));
        }
      });
    },
    minLength: 1,
  });
});
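The snippet above only wires up the 260 subfields. For RDA records that carry the imprint in 264 instead, the same pattern can be repeated. Below is a minimal, hedged sketch assuming Koha generates those inputs with the usual tag_264_subfield_a / tag_264_subfield_b id prefixes and that you want suggestions from the same biblioitems columns; the helper name is ours, purely illustrative, and the calls go inside the same $(document).ready() wrapper:

// Illustrative only: reuse the 260 lookup logic for RDA's 264 $a / $b
function imprintAutocomplete( idPrefix, fieldName ) {
  $( '[id^="' + idPrefix + '"]' ).autocomplete({
    source: function(request, response) {
      $.ajax({
        url: "/cgi-bin/koha/cataloguing/260search.pl",
        dataType: "json",
        data: { term: request.term, table: "biblioitems", field: fieldName },
        success: function(data) {
          response( $.map( data, function( item ) {
            return { label: item.fieldvalue, value: item.fieldvalue };
          }));
        }
      });
    },
    minLength: 1
  });
}
imprintAutocomplete( "tag_264_subfield_a", "place" );
imprintAutocomplete( "tag_264_subfield_b", "publishercode" );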

A video of autocomplete in action

Conclusion

By tweaking the 260search.pl script, or even by completely re-writing it to use the various search functions shipped by Koha inside its /usr/share/koha/lib directory on a .deb package based installation, you can do so much more than is possible with this simple hack. Happy hacking! 🙂 [4]

References:

[1] http://www.loc.gov/catworkshop/RDA%20training%20materials/LC%20RDA%20Training/Module1IntroManifestItemsSept12.doc

[2] https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started

[3] https://gist.github.com/l2c2technologies/7d0449dcb80c90880381ef4571003d1d

[4] http://catb.org/jargon/html/H/hack.html

JQuery tips for Koha : Adding easy to use indicator picklists

Adding picklists for selecting indicators for MARC tags used in Koha’s cataloging worksheets.

During data audits of users' MARC21 data, quite frequently we find that most, if not all, records make no use of indicators at all. Trained library professionals often give a sheepish grin when asked why they didn't add them while cataloging the documents. 😉 But trained librarians are not the only ones who work with library systems like Koha. There are many people who find themselves working in a library without formal training or sufficient theoretical background in MARC21. Generally speaking, the reasons for not adding the indicators range from:

  • Lack of practice – thus unsure of the correct indicator to use.
  • Lack of awareness – i.e. untrained people with a very basic knowledge of cataloging
  • Lack of user-friendly mechanism to input indicators
  • And lastly – sheer laziness

Now, we can't do anything about the last one, but the rest of the reasons could use a bit of a leg-up! So here goes the newest tutorial on how to add easy-to-use picklists to help us correctly populate the indicators.

According to the Design Principles of MARC21, indicators form a part of the family of content designators [1]. As defined, an indicator is:

A data element associated with a data field that supplies additional information about the field. An indicator may be any ASCII lowercase alphabetic, numeric, or blank.

For this tutorial we will focus on MARC21 bibliographic data fields 100 and 110, i.e. Main Entry Personal Name and Main Entry Corporate Name respectively. We will not touch the Koha template files at all; rather, as per the global best practice for Koha ILS, we will utilize only jQuery (JavaScript) and HTML via the Koha system preference IntranetUserJS.

Step #1 – Finding out the DOM nodes

We will start by going to Home > Cataloging > Add MARC record in Koha and selecting the framework we want to work on. In this case we chose to work with the "Default framework" that is shipped with Koha. We used Google Chrome's Developer Tools Inspect option [2] to find out the id of the selector (DOM node) we need for Main Entry Personal Name.

Since we need space to set up the picklist, we chose to use the free space available on the div that displays information about the field, which follows immediately after it. As you can see in the image below, that div has an id identifying it, which is very good for us, since it makes selecting the DOM node absolutely painless.

blog_01

It should be noted that when Koha renders the cataloging interface, it suffixes the HTML element IDs with a random number (one for each new tag). In this case, the id was div_indicator_tag_100_838390 where "838390" is the random suffix. We needed to latch on to the first part, i.e. div_indicator_tag_100.

Step #2 – Let the JQuery magic work

We have to add the select dropdown picklists right after the text on the div_indicator_tag_XXX DIVs. The values we will use for the indicators come from the MARC21 Bibliographic definitions for fields 100 and 110 respectively.

$(document).ready(function(){
  if ( $("#cat_addbiblio").length ) {   // only while adding biblios
    $('div[id^="div_indicator_tag_100"]').append(' <label for="tag_100_indicators">Apply Ind1, Ind2</label> <select id="tag_100_indicators"><option>-Select-</option><option value="1">1 - Surname</option><option value="0">0 - Forename</option><option value="3">3 - Family name</option></select>');
    $('div[id^="div_indicator_tag_110"]').append(' <label for="tag_110_indicators">Apply Ind1, Ind2</label> <select id="tag_110_indicators"><option>-Select-</option><option value="2">2 - Name in direct order</option><option value="0">0 - Inverted name</option><option value="1">1 - Jurisdiction name</option></select>');
  }   // end if
});

blog_02

While that added the picklists, we still have to add the actual logic that will populate the indicators when a value is selected from the list. Again we turn to jQuery for the following snippet:

$(document).ready(function(){
  $('#tag_100_indicators').click(function(){
    var what_clicked_100 = $('#tag_100_indicators').val();
    if ( !isNaN(what_clicked_100) ) {
      $('input[name^="tag_100_indicator1"]').val(what_clicked_100);
      $('input[name^="tag_100_indicator2"]').val("#");
    } else {
      $('input[name^="tag_100_indicator1"]').val("");
      $('input[name^="tag_100_indicator2"]').val("");
    }
  });
  $('#tag_110_indicators').click(function(){
    var what_clicked_110 = $('#tag_110_indicators').val();
    if ( !isNaN(what_clicked_110) ) {
      $('input[name^="tag_110_indicator1"]').val(what_clicked_110);
      $('input[name^="tag_110_indicator2"]').val("#");
    } else {
      $('input[name^="tag_110_indicator1"]').val("");
      $('input[name^="tag_110_indicator2"]').val("");
    }
  });
});

The code above listens for when we click and select a value from the picklists, i.e. when we trigger a click JavaScript event. Next it checks whether we selected a real value OR whether we just "clicked" the placeholder "-Select-" option that has no value. And lastly, based on what was selected, it sets the ind1 and ind2 values accordingly.

blog_04

Conclusion

In this manner we can add easy-to-use picklists for indicators. Since it is now only a matter of selecting from the available values, it also significantly reduces the scope for typographical errors during data entry into the indicator boxes. Before we leave for today, do note that the second code listing may be better handled as a single JavaScript function to which the tag references are passed by a handler hook; a rough sketch of that idea follows below. Doing so would make for a cleaner and leaner implementation of this concept, especially if you are planning to set it up for all the non-control MARC21 fields you use. Also, you may wish to implement the selected dropdown value check using something other than isNaN [3].
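By way of illustration, here is a minimal sketch of that refactoring, assuming the same tag_XXX_indicators id convention used above; the function and variable names below are ours, not anything Koha ships:

// Illustrative refactoring only – one shared handler for every tag that gets a picklist
function applyIndicatorFromPicklist( tag ) {
  var picked = $('#tag_' + tag + '_indicators').val();
  if ( !isNaN(picked) ) {
    $('input[name^="tag_' + tag + '_indicator1"]').val(picked);
    $('input[name^="tag_' + tag + '_indicator2"]').val("#");
  } else {
    // the placeholder "-Select-" was clicked, so clear both indicator boxes
    $('input[name^="tag_' + tag + '_indicator1"]').val("");
    $('input[name^="tag_' + tag + '_indicator2"]').val("");
  }
}

$(document).ready(function(){
  $('#tag_100_indicators, #tag_110_indicators').click(function(){
    // derive the tag number (e.g. "100") from the id of the picklist that was clicked
    var tag = this.id.replace(/^tag_(\d+)_indicators$/, '$1');
    applyIndicatorFromPicklist( tag );
  });
});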

References

[1] “MARC 21 Specifications for Record Structure, Character Sets, and Exchange Media – RECORD STRUCTURE (2000)” https://www.loc.gov/marc/specifications/specrecstruc.html

[2] “Chrome DevTools” – https://developers.google.com/web/tools/chrome-devtools/

[3] isNaN() – https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/isNaN

Koha Quick tip: Removing the dot from salutation like “Ms.” after a bulk patron import

A quick-n-dirty way to remove the trailing dot from the salutation field after a bulk patron import into Koha.

Earlier today the deputy librarian at our client partner Gouri Devi Institute of Medical Sciences and Hospital (GIMSH) called us wanting to figure out how to fix a small but crucial problem. The employee list from GIMSH's HR department was used to create the patron bulk import CSV [1] file. All 143 records got imported without a glitch and each individual record looked perfectly OK when viewed. However, when she tried to edit these patron records, she noticed that the salutation (e.g. Mr, Mrs, Ms etc.) would not show up in the salutation drop-down during edit! 🙁

missing salutation
Missing salutation even though salutation was entered into the database

Explanation

About 141 out of the total 143 records had a period (".") after the salutation, and two didn't. The two records which did not have the period at the end of the salutation were displaying the stored value correctly in the drop-down, rather than showing as blank. As it happens, Koha uses the BorrowersTitles system preference to store the options to display. The default options are Mr|Mrs|Miss|Ms. As you can see, none of these have a period at their end. So, while the salutation field value was stored in GIMSH's borrowers table, it was not being displayed during the edit as the value didn't match the values stored in the BorrowersTitles syspref.

The “Fix”

***WARNING*** The following step requires you to execute an SQL statement directly on the Koha database. This is usually not recommended, but sometimes it is required to quickly fix a problem.

Using the TRIM() string function of MySQL in tandem with the TRAILING specifier [2], we can take out the "." (period) at the end of the salutation from all the records in the borrowers table in the following manner:

UPDATE `borrowers` SET `title` = TRIM(TRAILING '.' FROM `title`);

salutation drop down is working now
salutation drop down is showing the stored value correctly now

References

[1] See the entry Comma separated values from Wikipedia.

[2] String Functions in MySQL 5.5 – http://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_trim

[3] Sample patron data file – patron_import.csv NB. Those of you who wish to study how the HR data was prepared for import into Koha can look here for a sample CSV file with dummy data (do not worry about privacy; the data is fake).

Changing languages in the drop-down for Google Transliterate API

Choose the list of languages to transliterate on the Koha OPAC

This tends to be a frequent question that we end up fielding from colleagues and customers. But before we get into this, please remember that this feature is probably at the end of its shelf-life. There are two reasons for this – the Transliterate API that this feature depends on was deprecated many, many years back, and secondly, the way this API was put together – it fetches the necessary JavaScript library from Google's servers over HTTP! So, if you are using HTTPS on your Koha OPAC, you can kiss this feature goodbye right now! Being deprecated, there is almost zero chance that Big G will do anything to change that.

Life sucks, right?? 😉 But as long as it is still there, here's "one for the road"! 😀 So, read on!

Turning on Google Transliteration

googleindic

It all starts with setting the GoogleIndicTransliteration system preference to "Show" from the default of "Don't show".

By default that provides us with this short language drop-down:
googlist

For users from India, a country with 22 official languages at the last count, that default list is woefully inadequate. The good news is that Google's Transliteration API supports several dozen languages. Which ones? Well, you need to dig around the LanguageCode and SupportedDestinationLanguages enumerations in the documentation.

Show me the money!

The code to support the function is loaded via the googleindictransliteration.js file, which resides under /usr/share/koha/opac/htdocs/opac-tmpl/bootstrap/js/ on a Debian package based installation.

The screen grab below shows exactly the changes we needed to make in order to show Hindi and Bengali as the list of languages, *and* with Bengali as the default option on the select drop-down.

googchanger

So how did I know that I needed to use "bn" for Bengali? Well, from that LanguageCode enum list I've linked to above. Simple, really!
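For orientation, the language list in that file ends up being handed to Google's (deprecated) Transliteration API as an options object roughly along the following lines. This is a generic illustration of the API as documented by Google, *not* a verbatim copy of Koha's googleindictransliteration.js, and listing Bengali first so that it becomes the pre-selected option is an assumption based on the change described above:

// Generic illustration of the Google Transliteration API options involved
var options = {
    sourceLanguage: google.elements.transliteration.LanguageCode.ENGLISH,
    destinationLanguage: [
        google.elements.transliteration.LanguageCode.BENGALI,  // 'bn'
        google.elements.transliteration.LanguageCode.HINDI     // 'hi'
    ],
    shortcutKey: 'ctrl+g',
    transliterationEnabled: true
};
var control = new google.elements.transliteration.TransliterationControl(options);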

One more fact about this tip that will bug a few

Changes to the LanguageCode list like this within the googleindictransliteration.js file will cause *all* instances on a multi-tenanted Koha installation to display the same list of languages. So, if you need different language lists on different OPACs, you are probably out of luck.

Customizing Koha’s MARC21 frameworks? Know the rules or get help!

Either you know what you are doing or take time to learn or invest in quality support. Fail on all three counts and you are quite literally asking for an operational nightmare.

Recently a young colleague, Sri Ashkar K. from Thiruvananthapuram, Kerala (India), ran into a problem. He works as a librarian with Mathrubhumi, a major media house from Kerala. Specifically, he needed an LMS solution to efficiently manage their collection of entertainment-related (mostly movie) CDs and DVDs. In his case, the LMS was a hosted Koha. However, when he tried to issue an item, i.e. a movie CD, he was stumped by this error every time:

Software error
No branchcode argument passed to koha::Calender->new at /var/koha_all/mathrubhumi/lib/C4/Circulation.pm line 3558

Being on a hosted Koha platform, he approached his service provider for support. He shared with them all the relevant screenshots leading to the error detailed above.

The provider's tech support could not identify the issue and instead informed him that they could perform checkouts (issues) without any errors. As Ashkar persisted, the service provider's support desk asked him to provide remote desktop sharing using Teamviewer so that they could see "his problem" in action. Installing Teamviewer needed clearance from his IT department, which took time, and thus Ashkar's checkout problem continued to linger. Finally, about 10 days back, he posted about it on the official Facebook page of the Koha Library System Project, asking for suggestions to resolve it.

The first flag was raised by fellow Koha dev Mark Tompsett when he asked:

“/var/koha_all/mathrubhumi/lib/C4/Circulation.pm” — That is not a standard installation path. How did you install this? And what version?

Ashkar replied that since the software was hosted, he did not know the installation details. This got my attention! If he was on hosted Koha, why was he turning to the community for support? What was his service provider doing in the first place? I decided to find out more. That’s when I discovered the details of his situation. Desperate for help, he provided me with superlibrarian access to his hosted Koha’s staff client interface. I logged in and found that the problem was very real. In fact, I found out a few rather *disturbing* things.

The hosted Mathrubhumi Koha instance wasn't running on the stable version (which is 16.05.05 at the time of writing) of Koha ILS. In fact, it was running on an unstable development version (at the time of this writing it was using Koha 16.0600023). Development versions are not GA releases and are *never* meant for production use; they are meant for use by testers and developers. And secondly, I could not do a MARC21 export of his bibliographic data.

That set alarm bells ringing in my head, and so, with Ashkar's approval, I created a backup of his Koha database and installed the backup on L2C2's test server running the latest stable 16.05.x version.

The first clear indication of what was wrong came soon after running sudo koha-rebuild-zebra -v -f mathrubhumi successfully without any error. A wildcard search from both the OPAC as well as the staff client failed to return a single result, even though the Zebra indexer and output logs showed no errors. However, it was possible to access a record directly by its biblionumber.

Running the “MARC Bibliographic framework test” to check the MARC structure provided the answer. Sure enough there were two major errors as shown below:

homebranch NOT mapped – the items.homebranch field MUST:

  • be mapped to a MARC subfield,
  • the corresponding subfield MUST have "Authorized value" set to "branches"

holdingbranch NOT mapped – the items.holdingbranch field MUST:

  • be mapped to a MARC subfield,
  • the corresponding subfield MUST have "Authorized value" set to "branches"

The question now was to identify *which* MARC21 framework was at fault, since he had three (03) of them.

marcframe1
Ashkar’s MARC21 frameworks

Checking the "MOVIES" framework, it was found that both 952 $a (homebranch) and $b (holdingbranch) were set to ignore in the Managed in tab dropdown. This explained the error displayed by the "MARC Bibliographic framework test". To know more about the 952 MARC21 field in Koha, please read Holdings data fields (9xx) from the Koha community wiki.

The Fix

It was a simple matter of setting both 952 $a and $b to “items(10)” for the option Managed in tab. This took care of the “MARC Bibliographic framework test” error.

However, that was only the first part of the solution. Except for two, none of his other 23 bibliographic records had their homebranch and holdingbranch defined. It was time for a batch item modification from the Tools page (Home > Tools). This has been covered in detail in an earlier blog post – "Koha's MARC modification templates comes to the rescue" – so if the topic sounds unfamiliar, it is suggested that you read that post first.

In order to find out all the barcodes that needed to be used to update the records, the following SQL report was used:

SELECT items.barcode 
  FROM items 
  LEFT JOIN biblioitems ON (items.biblioitemnumber=biblioitems.biblioitemnumber) 
  LEFT JOIN biblio ON (biblioitems.biblionumber=biblio.biblionumber);

With the list of barcodes in hand, it was time for the final steps:

  1. Load up barcodes for the records to be bulk modified
    marcframe4
  2. Select the two fields that we wanted to update – homebranch and holdingbranch
    marcframe2
  3. Select the actual branch option for both and click on Save
    marcframe3
And we were done! 🙂

The Explanation

Understanding the error is quite simple if you know how circulation works inside Koha. A checkout operation needs to know a few basic things – (a) who owns the item; (b) where the item is presently located; (c) what to set as the issue and due dates; and (d) who is taking it. Since the items attached to bibliographic records created using the MOVIES MARC21 framework did not have their homebranch and holdingbranch defined, Koha failed when, at the time of checkout, it tried to set the issue date and calculate the due date using the date functions of the Koha::Calendar object. That's what gave Ashkar his error and prevented him from checking out an item.

This still left one question unanswered – why did Ashkar's hosting provider keep insisting that everything was working OK at their end and want him to provide them with Teamviewer access instead? My best guess is that they were checking the system using only the MARC21 frameworks which *they* had shipped, i.e. the default and fast add (FA) frameworks. Since records generated using these two frameworks (quite correctly) had 952 $a and $b set, none of them triggered Ashkar's error during checkout. They certainly did not need Teamviewer access; the error in Ashkar's framework should have been easily detected and quickly fixed. In fact, it took less than 3 minutes to take care of it. But they failed, which is why it is important to either invest in your own skill development (read RTFM) OR invest in quality support.

“If you pay peanuts, you get monkeys” – James Goldsmith

Moral of the story: If you work with service providers whose front-line tech support is staffed with inexperienced people, be prepared for the long haul and to support yourself. Caveat emptor!

Setting specific ‘lockdown’ of Koha’s system preference options

Individual ‘lockdown’ of Koha’s system preference settings using a bit of jQuery and CSS.

The current stable version of Koha 16.05.4 ships with some 548 system preferences. These are stored in the 'systempreferences' table in the database. Inside the Koha staff client, they are accessed by visiting the Home > Administration > Global system preferences menu link. If this is the first time you are hearing about system preferences in Koha or you are not deeply familiar with them, it is suggested that you familiarize yourself with this chapter section of the Koha 16.05 manual.

The objective here is not to prevent someone's use of Free Software, but rather to ensure they are only committing pre-validated changes to the production server. Changes have consequences, and whoever makes them should be fully aware of the impact of those changes.

While Koha's per-user access control feature does provide a way to allow or withhold a user's access to view / edit the system preferences, it does so with an "all or none" approach, i.e. either the user has access to *all* the system preferences or to none. This lack of access control granularity can prove to be slightly undesirable under certain circumstances. For example, you may want certain settings not to be changed at all, or not changed accidentally, or not changed without first testing and validating the change on a staging system. In our case, on our managed systems we do not want the designated superlibrarian user at the client's end to make changes to, say, the opacheader, opaccredits, OPACUserJS, OPACUserCSS, IntranetUserJS, IntranetUserCSS and OpacNavBottom system preferences on the production VM, without first testing the changes on a test VM.

The implementation

We implemented the setting-specific 'lockdown' of the system preference settings using a bit of jQuery and CSS.

Step #1

First we identified the selectors we needed in order to enable the lockdown. The easiest (and recommended) way to do this is to 'inspect' your target DOM elements (i.e. the ones you want to lock down) on the system preference administration page(s). As mentioned before, we want to lock down the following sysprefs: IntranetUserJS, IntranetUserCSS, OPACUserJS, OPACUserCSS, opacheader, opaccredits, OpacNavBottom. Looking at the DOM made it clear that we needed to work with the following id-based selectors – pref_IntranetUserJS, pref_IntranetUserCSS, pref_OPACUserJS, pref_OPACUserCSS, pref_opacheader, pref_opaccredits and pref_OpacNavBottom respectively.
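A quick sanity check (purely optional, not part of the final code) is to paste a one-liner like this into the browser's JavaScript console while the relevant system preferences page is open:

// Should print 1 if the textarea for OPACUserCSS is present on the page
console.log( $('#pref_OPACUserCSS').length );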

Step #2

The next step was to decide how tight we want to make the 'lockdown'. We did not want it airtight, so here is what we did: we left IntranetUserJS and IntranetUserCSS merely disabled, but for the rest we removed their respective textarea elements from the loaded DOM. Had we wanted things really tight, we could have done the same for the two disabled ones. The screenshot below shows the code; an approximation of it is sketched just after the screenshot.

lockdown_01
Click on the image to view it at full size
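Since the actual code lives in the screenshot above, here is a hedged approximation of what goes into IntranetUserJS. The selector list mirrors Step #1; treat it as a sketch rather than a copy of the screenshot:

$(document).ready(function(){
  // leave these two merely disabled, so the lock can later be lifted from the browser itself
  $('#pref_IntranetUserJS, #pref_IntranetUserCSS').attr('disabled', 'disabled');
  // remove the editing textareas of the rest from the loaded DOM altogether
  $('#pref_OPACUserJS, #pref_OPACUserCSS, #pref_opacheader, #pref_opaccredits, #pref_OpacNavBottom').remove();
});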

Note: Should you use .remove() on all the elements above instead of setting the attribute to disabled, then the only way to get access to them would be by directly editing the IntranetUserJS syspref’s value in the database.

Step #3

We will also add hints to the labels so that users can understand why they are not able to access the setting; one possible way of doing this is sketched below (see the green arrow on the left in the screenshot above for the actual code). Once done, save the IntranetUserJS syspref and exit. We are done.
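One simple way to add such a hint (done before the corresponding elements are disabled or removed, so the note survives once the textarea is gone) is to insert a small piece of markup next to each pref_… element. The icon class and wording below are our own illustrative choices; the actual code in the screenshot may differ:

// Illustrative hint – add this line above the disable/remove calls in the sketch earlier
$('#pref_OPACUserJS, #pref_OPACUserCSS, #pref_opacheader, #pref_opaccredits, #pref_OpacNavBottom')
  .before('<span class="lockdown-hint"><i class="fa fa-lock"></i> Locked: please test changes on the staging VM first.</span>');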

Checking our work so far

Let us search for the OPACUserCSS system preference. We will see (as shown below) that the editable textarea element is no longer present. Note the "Click to collapse" text without the editable textarea element holding the actual setting value. Also, there is now a small lock icon against the label, with text explaining why the edit window is not present.
lockdown_00a

Unlocking the ‘lockdown’

What we have implemented so far will prevent someone with system preference edit permission from accidentally editing the 'locked' system preferences from the Admin page. In order to "unlock", first we need to access the IntranetUserJS syspref, which we had only disabled in this case.

Unlocking – Step #1

Right click on the IntranetUserJS syspref and select Inspect

lockdown_00b
If you did it correctly, the element with id pref_IntranetUserJS will be highlighted. Note the disabled attribute, which is pointed to by the red arrow in the screenshot below:

lockdown_00c

Unlocking – Step #2

Double-click to select the disabled="disabled" attribute of the textarea element.

lockdown_00d

Unlocking – Step#3

Delete the disabled attribute; the textarea element should now look like this.

lockdown_00e

Unlocking – Step #4

Close the Developer Tools window, but *do not* move out of the IntranetUserJS syspref yet! We still have work to do. You will see that the textarea is no longer disabled and is now open for editing. In order to remove the 'lockdown' on our system preferences, we need to comment out the jQuery code we had added earlier. We do this simply by wrapping the relevant code inside a C-style /* [...] */ comment block. See the green arrows in the image below (an example follows the screenshot):

lockdown_00f
Click on the image to view it at full size
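In other words, using the sketch from Step #2 as an example, the 'locked' state and the temporarily 'unlocked' state differ only by the comment markers:

/* lockdown temporarily disabled – remove this comment block (and the closing marker) to re-lock
$(document).ready(function(){
  $('#pref_IntranetUserJS, #pref_IntranetUserCSS').attr('disabled', 'disabled');
  $('#pref_OPACUserJS, #pref_OPACUserCSS, #pref_opacheader, #pref_opaccredits, #pref_OpacNavBottom').remove();
});
*/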

Unlocking – Step#5

Save the IntranetUserJS syspref and now try to access the OPACUserCSS syspref again. As you can see from the image below, the system preference is no longer locked and is now open for editing.

lockdown_00g

Re-locking

Once we are done making the necessary changes, we may wish to 'lock down' the settings again. We simply need to go back and edit the IntranetUserJS syspref and un-comment the locking code by removing the C-style comment markers. Easy peasy!

Switching content language on Koha OPAC with user interface locale switching

How to display custom content in the user’s own language on the OPAC.

Last week Mr. Ahmad Nasser from the Future University of Egypt reached out for a bit of help. The Koha OPAC provides certain sections / blocks, e.g. OpacNav, OpacNavBottom and OpacMainUserBlock etc., where libraries can add custom content / instructions / links / widgets to better aid and inform their users about their library and its services. Nasser's case was interesting since he needed to cater to a bi-lingual readership where some users may prefer to read the information presented in Arabic rather than in English.

The development of language was the greatest breakthrough of human technology. It helps us to communicate. But language, when it is not shared by a group of people, can create problems. How does a Bengali communicate with a Tamil, or a Malayali with an Assamese, when they do not understand each other's language and do not happen to speak English, the global lingua franca? Sort of like the line from the famous song featuring Raj Kapoor in his 1955 super hit Shree 420 that goes "mera joota hain Japani, yeh patloon engleesthani, saar pe topi russi…." ('My shoes are Japanese, these pants are from England, the red hat on my head is Russian…') – indeed, how do we cater to this diversity!

trans_raj
Still image copyright: Shemaroo Videos

When it comes to software like, say, Koha, the answer lies in localization – a process which allows software to present information to its users in their own language of choice.

Koha's user interface (UI) locale switching allows users to switch the user interface language, e.g. from the default English to, say, Chinese (Taiwan) or Hindi (India), as long as the language pack exists for Koha. However, this switching is not designed to also switch the language of the content in the custom blocks we mentioned in the previous paragraphs.

Nasser wanted a way to display the content of, say, OpacMainUserBlock in Arabic when the user switched the user interface to Arabic, and back in English when another user wanted to use the default language (i.e. English). This post highlights one way by which Koha administrators / librarians can let their users see the content in the language of their choice, rather than an arbitrary default language or, even worse, a mish-mash of two or more languages.

This case is relevant to libraries in India as well, with our multitude of languages – 22 official languages at the last count. How do we serve content in English to the top 10 – 15% of our population, and at the same time address the rest, who are literate in their own language and may some day be using Koha? Our records may be in the local regional language, but how about the added custom content? This solution works by looking at the present locale [1] selected by the user on the Koha OPAC.

The Solution

As I've mentioned, this is not the only way to solve this problem. But it is probably the simplest *and* the cleanest one. And it does so by using three things:

  • The selected locale language of the Koha OPAC
  • One line of custom CSS placed into the OpacUserCSS system preference
  • Exactly 3 lines of JavaScript added to the OpacUserJS system preference

In this blog post, we're only looking at managing the OpacMainUserBlock – the central block on the OPAC – but the solution can be applied to every other block that accepts custom HTML markup, including OpacHeader, OpacCredits, as well as "Koha as CMS" pages etc.

If you have never set up multiple language support on Koha, you can read up – "Installation of additional languages for OPAC and INTRANET staff client" – and familiarize yourself first.

The Demo

I've set up a multi-language demo Koha installation with the following languages aside from the default English:

(a) Arabic (ar-Arab)
(b) Czech (cs-CZ)
(c) German (de-DE)
(d) Hindi (hi)
(e) Slovak (sk-SK)
(f) Chinese (Taiwanese: zh-Hans-TW)

The URL is https://demo-opac.l2c2academy.co.in/cgi-bin/koha/opac-main.pl where you can see this working in action. As you change the selected language and right-click to view the source code of the page, you will notice that the "lang" attribute of the "html" element changes to the language codes given inside the parentheses above. Below is a snapshot of 6 of the 7 languages as rendered in the HTML source once you change the language.

trans_all_src

Hint: That lang attribute is our locale identifier and it changes every time we select a different language. Try it out on the demo and see it for yourself.

Since this depends on using CSS to toggle the visibility of our local language content, we are going to define a disabled class in our OpacUserCSS system preference like this:

/* disabled class */

.disabled {
   display: none;
}

In this example we will use a <div> element like the one given below:

<div class="en disabled">

 your local language content goes here

</div>

However, we can use this technique on *any* HTML element whose visibility can be toggled via its display CSS property [2]. We will need to add two extra classes to our HTML element – the first class will be named after the lang attribute value of the language it targets, and the second class will be the disabled class. We'll need to repeat this definition for each language that we want to deal with.

For your reference, here is a listing of my OpacMainUserBlock for this example; please download and study it in order to understand the process better.

NOTE: For this example, I’ve selected a single paragraph from the entry on “Wikipedia” from the Arabic, Czech, German, English, Hindi, Slovak and Chinese Wikipedia.

Once our custom HTML is in place, we will need a way to toggle its visibility (CSS display property) based on the user-selected language locale via the lang attribute. For this we'll use the following jQuery snippet in our OpacUserJS system preference:

$(document).ready(function() {

  var selectedlang = $('html')[0].lang;

  var buildClassString = ".".concat(selectedlang);

  $(buildClassString).removeClass('disabled');

});

The first line finds out the lang attribute of our <html> element. In the next line we build a string to hold the selector for the class (since classes are denoted in jQuery selectors by a dot in front of the class name). And finally, in the third line, we remove the disabled class from the content whose language class matches the lang attribute. By removing the class from the element, we automatically cause its display CSS property to revert, making the element visible.

What really happens behind the scenes

The custom HTML markup is first loaded with its visibility turned off. Once the page is loaded, the $(document).ready() jQuery call looks up the currently selected language and removes the display: none; CSS style from the matching element by removing the disabled class. As a result, the element and the content it is designated to display become visible. This whole cycle is repeated when we select another language. Thus, we are now able to provide our users with custom HTML markup and content based on the language they selected.

Reference

[1] "Locale (computer software)" – Wikipedia, the free encyclopedia

[2] "CSS/Properties/display" – W3C Wiki

Quick tip: Add "barcode" lookup to your OPAC's search index selection dropdown

If you wish to add an option to the OPAC search dropdown, e.g. "Barcode", you can achieve it with a single line of jQuery. There is absolutely NO need to edit masthead.inc as suggested in BUG #8302 – http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=8302. This jQuery one-liner does the job rather well; simply place it inside your OPACUserJS system preference.

$("select[name='idx']").append("<option value='bc'>Barcode</option>");

If you wish to know how I arrived at the value bc, I suggest you take a dive into the ccl.properties file in your Koha installation.

P.S. You can also search as bc: <your barcode number> in the search box. That works too, even without adding the option to the drop-down, as you are directly passing a CCL query option back to the Koha search module.

This post is based on an earlier Facebook post published here https://www.facebook.com/l2c2technologies/posts/775648639191038 on Feb 23, 2015.

The 5-Minute Series: JavaScript or CSS – what is better at hiding the ‘No cover image available’ markup?

How to efficiently remove the 'No cover image available' placeholder using CSS rather than jQuery, when AmazonCoverImages or GoogleJackets can't retrieve a cover image for your displayed title.

We really like our OPACs to display the cover images of the books and journals which we have often so painstakingly cataloged. Out of the box, Koha allows us to fetch and display book covers from several sources, both local and from data providers over the Internet, the commonest of these being (a) AmazonCoverImages and (b) GoogleJackets. While these two settings (either or both) will work for a large number of books, for users in India there is also a lot that these two sources do not offer – especially books in Indian languages, or simply books printed without an ISBN (there are a lot of these in India).

When no image is found, Koha displays the text "No cover image available" as a placeholder. A lot of people would rather not see this. The Koha JQuery Library on the Koha Community wiki offers the following jQuery-based approach:

$(document).ready(function(){
    $('.no-image').remove();
});

You simply place this snippet into the OPACUserJS system preference and hey presto! these pesky "No cover image available" displays are history. Well, yes and no! Yes, it works, and no, this may not prevent that text from being displayed at least once. Why? Well, how about a slow PC and an even slower Internet connection? Either one of the two, or a combination of both, will usually ensure that you get to take a look at "No cover image available" before it is "removed" from the displayed page.

The simpler thing, IMHO, is to take the Cascading Style Sheets (CSS) route, which is as simple as placing the following one-liner into your OPACUserCSS system preference:

.no-image { display: none; }

no-image-removed

Not only will you *never* get to see the "No cover image available" markup anymore, no matter how slow your PC / Internet connection is, this is also more efficient: rather than using jQuery to first select and then .remove() a DOM element with a particular style class attribute, we simply declare that such elements shouldn't be displayed at all. The browser does all the optimization in this approach.

Koha spine label is not printing the “/” in your call numbers? Here is why.

If you have defined DDC as your classification source and have a "/" in your Koha item call number, it is not going to be displayed when generating spine labels. If you are in a hurry or are already aware of the segmentation mark, you can jump straight to the section The Answer.

The “Problem”

Earlier in the day a fellow user, Dyuti Samanta, came up with a question:

"Sir, I'm trying to print spine labels from Koha. However, I see that Koha does not print the front slash ("/") in my itemcallnumber, even though the same is recorded in my MARC record and is otherwise displayed by Koha elsewhere. For example, the "CHA / L" in "025.4 CHA / L" is being printed as "CHAL". So where is the problem, and how can I fix it?"

The Background

Dyuti's question made me smile. And instead of immediately telling him about the "why", I pointed him to a comment left by Anamika Das on Vimal Kumar Vazaphally's blog post – "Spine label creation" – saying "You are not alone with that question! ;-)".

A call number typically consists of the Dewey class number + a book number, i.e. a Cutter number (or some other means of alphabetic arrangement). The front slash "/" is deemed a segmentation mark (a la the prime mark in C-I-P records) in the universe of the Dewey Decimal Classification [1]. Up until DDC 22, published in 2003 [2], the slash or prime mark was used to mark the start of every standard subdivision (notation from Table 1) as well as the end of the abridged number. However, this rule changed from DDC 22 onwards (September 1, 2005 to be exact) and remains extant for the current edition, i.e. DDC 23, published in 2011. The new rule is that only *one* single segmentation mark may be used, and that too only for marking the end of the abridged number [3].

A before-and-after example straight from the Library of Congress:

Before DDC 22 – 551.21/09797/84

DDC 22 onward – 551.210979/84

Further, if you follow LC and OCLC norms, while the Dewey class number in MARC21 field 082 can definitely have (since Sep 1, 2005) a *single* segmentation mark, the call number should never have one. With this background story in place, let's look at Koha to understand what is happening here.

The Answer

The particular Koha code that has taken out the slash from both Dyuti's and Anamika's call numbers resides in the C4::Labels::Label Perl module, which is located at /usr/share/koha/lib/C4/Labels/Label.pm. Even more specifically, it is the _split_ddcn subroutine in Label.pm that is taking out the "/". As we have already noted, under LC rules, call numbers (unlike Dewey class numbers in 082) can't have segmentation marks. Thus it takes out any "/" embedded in your call number while processing the spine label. Very specifically, it is this line in the _split_ddcn subroutine that does it: s/\///g; # in theory we should be able to simply remove all segmentation markers and arrive at the correct call number. And just why does _split_ddcn get invoked? Well, it is because of something you did during cataloging – remember that you had recorded DDC as the classification scheme? It is that definition in your MARC record that calls in this sub. 😀

Below you can see the _split_ddcn subroutine as on the date of this post.

sub _split_ddcn {
    my ($ddcn) = @_;
    $_ = $ddcn;
    s/\///g;   # in theory we should be able to simply remove all segmentation markers and arrive at the correct call number...
    my (@parts) = m/
        ^([-a-zA-Z]*\s?(?:$possible_decimal)?) # R220.3  CD-ROM 787.87 # will require extra splitting
        \s+
        (.+)                               # H2793Z H32 c.2 EAS # everything else (except bracketing spaces)
        \s*
        /x;
    unless (scalar @parts)  {
        warn sprintf('regexp failed to match string: %s', $_);
        push @parts, $_;     # if no match, just push the whole string.
    }

    if ($parts[0] =~ /^([-a-zA-Z]+)\s?($possible_decimal)$/) {
          shift @parts;         # pull off the mathching first element, like example 1
        unshift @parts, $1, $2; # replace it with the two pieces
    }

    push @parts, split /\s+/, pop @parts;   # split the last piece into an arbitrary number of pieces at spaces
    $debug and print STDERR "split_ddcn array: ", join(" | ", @parts), "\n";
    return @parts;
}

Note: The _split_ddcn subroutine was first submitted to the Koha codebase as part of the C4::Labels::Label module by Chris Nighswonger on Jul 20, 2009, by which time LC's single segmentation mark rule was already long in place.

So now what?

There are a few options available to you at this point.

(a) If you know what you are doing, you can modify the _split_ddcn subroutine so that it does not discard the "/" and handles the call number as you want it to. (Non-trivial and not recommended.)

dontsplit

(b) Go to "Manage Layouts" and edit your specific layout, un-checking the option "Split call number". If you do this, your call number will be printed AS-IS as a single line of text. This means that if the call number is longer than your labels, as it will be at several points in time, you have a *problem*.

(c) Keep an eye on this bug report filed by Katrin Fischer earlier this year, where she said:

Currently the call number splitting seems to be mostly implemented for DDC and LC classifications. Those are both not very common in Germany and possibly other countries. A lot of our libraries use their own custom classification schemes so the call number splitting is something that should be individually configurable.

The bad news is that so far no one has responded to this bug, simply because to Koha developers servicing clients using LC / DDC, this is not a priority. So you can either wait in the hope that someone will attend to this bug soon, OR write this functionality yourself, OR sponsor a developer to write it for you.

(d) Take the item call number listing out of Koha as a CSV file and use a 3rd-party tool, e.g. gLabels, to generate your spine labels.

References:

[1] https://www.loc.gov/aba/dewey/segmentation.html

[2] Dewey_Decimal_Classification – Administration_and_publication

[3] “Sweet segment solution” from 025.431: The Dewey blog