Generate a single sheet custom MARC21 framework in 2 minutes

For intermediate Koha ILS users who wish to quickly generate a single tab MARC framework.

Last Thursday, Ashish Kumar Barik, librarian at our new client-partner Midnapore City College, filed a support ticket asking for a custom single sheet MARC21 framework, or what LIS professionals more commonly refer to as a “worksheet”. He wrote that he wanted the following tags: 000, 003, 005, 008, 020, 040, 041, 044, 082, 100, 245, 250, 260, 300, 440, 490, 500, 504, 650, 700, 942, and that the sub-tags/fields should be set as in the default MARC framework shipped with Koha. We promised him his new framework. Being new to this side of Koha, he had of course missed two key fields without which his system would be rendered practically useless i.e. the two local use tags 952 and 999. Koha uses 952 to handle holdings (item) information, while 999 is a purely internal tag used to track bibliographic records.

Now anyone who has ever set up a new MARC framework knows that it can be a laborious and time-consuming task. Further, there are chances of introducing inadvertent human errors that may lead to bad data when the framework is put to use. As a result, at L2C2 Technologies we have developed several well defined strategies to manage custom MARC frameworks for our clients. In today’s blog, we are going to share the simplest of the techniques we use in cases like this. The outcome of this exercise is a 100% error-free MARC framework generated in less than 2 minutes.

LEGAL DISCLAIMER: The next steps involve directly accessing and making changes in the Koha database. Use these instructions at your own risk; if you face any data loss, corruption or system errors, we are not responsible.

The Steps

  1. We used a regex capable editor like Notepad++ to quote the fields mentioned by Ashish, so that 000, 003, 005, 008, 020, 040, 041, 044, 082, 100, 245, 250, 260, 300, 440, 490, 500, 504, 650, 700, 942 became ‘000’, ‘003’, ‘005’, ‘008’, ‘020’, ‘040’, ‘041’, ‘044’, ‘082’, ‘100’, ‘245’, ‘250’, ‘260’, ‘300’, ‘440’, ‘490’, ‘500’, ‘504’, ‘650’, ‘700’, ‘942’. And while we did that, we also added the two fields missing from his list i.e. ‘952’ and ‘999’.
  2. Next we defined a new framework MCC1 (MCC Framework) by visiting Home -> Administration -> MARC bibliographic framework -> New framework.
  3. Next we copied the default framework into MCC1 as its base, since that is what Ashish had wanted. At this point, the MCC1 framework was exactly the same as Koha’s default framework.
  4. Next we fired up the MySQL console, logged in with the user id and password from MCC’s koha-conf.xml, and selected Ashish’s database (in this case koha_mcc) for the next steps.
  5. We fired the following SQL query:
    UPDATE
       `marc_subfield_structure` 
    SET
       tab=0
    WHERE 
       `frameworkcode`='MCC1' 
    AND 
       `tagfield` IN ('000', '003', '005', '008', '020', '040', '041', '044', '082', '100', '245', '250', '260', '300', '440', '490', '500', '504', '650', '700', '942')
    AND
       `tab`!=0;

    The MySQL client told us that 152 rows were affected.

    EXPLANATION: This moved all the listed 1XX to 9XX (except 952 and 999) MARC fields into Tab 0. The images below help illustrate the condition after this step:

  6. The next step was to set the rest of the fields outside the list supplied by Ashish *plus* 952 and 999 to be ‘ignored’ by Koha when using the MCC1 framework. And thus the following SQL query:
    UPDATE 
        `marc_subfield_structure` 
    SET 
        `tab`='-1' 
    WHERE
        `frameworkcode`='MCC1'
    AND 
        `tagfield` NOT IN ('000', '003', '005', '008', '020', '040', '041', '044', '082', '100', '245', '250', '260', '300', '440', '490', '500', '504', '650', '700', '942', '952', '999') 
    AND
        `tab`!=0;

    This time MySQL reported that 3416 rows were updated.

  7. Our last step at the MySQL command line was the following query, which removed the unwanted 0XX fields from Tab 0:
    UPDATE 
        `marc_subfield_structure` 
    SET 
        `tab`='-1' 
    WHERE
        `frameworkcode`='MCC1'
    AND 
        `tagfield` NOT IN ('000', '003', '005', '008', '020', '040', '041', '044', '082', '100', '245', '250', '260', '300', '440', '490', '500', '504', '650', '700', '942', '952', '999');

    MySQL reported 341 rows were affected.

  8. Coming back to MCC’s Koha staff client, we did the most important thing i.e. running the MARC bibliographic framework test. The test came out clean, without any errors.
  9. That’s it! MCC’s custom MARC framework is ready for use. Click on the image below and then zoom in to see the details up close. An optional SQL sanity check follows the steps below.
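Before handing the framework over, the tab distribution can also be verified straight from MySQL. This is a sketch using the same table and framework code as the queries above:

-- The fields requested by Ashish should all report tab 0, the item
-- subfields (952) tab 10, and everything set to be ignored tab -1.
SELECT `tab`, COUNT(*) AS `subfields`
FROM `marc_subfield_structure`
WHERE `frameworkcode`='MCC1'
GROUP BY `tab`
ORDER BY `tab`;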

NECOPAC : a new z39.50 service from North Eastern India

The launch of NECOPAC the first volunteer-run, freely available, public z39.50 service in North Eastern India.

On the auspicious occasion of Rongali Bihu (for the Assamese) and Poila Boishakh (for the Bengali), on behalf of the NECOPAC team, I’m happy to announce the start of the NECOPAC z39.50 service. This is the first freely available, public z39.50 (copy cataloging) service in North Eastern India. As of today there are 92,333 bibliographic records in the database, all of which are volunteer contributed. We are expecting more records to be contributed soon.

Origin of the project

The germ of the idea for NECOPAC took root about two years back during a chance meeting of a group of young like-minded library professionals and technology specialists in Guwahati, Assam.

North Eastern India is home to eight states – Assam, Manipur, Tripura, Meghalaya, Mizoram, Nagaland, Arunachal Pradesh and Sikkim. The region boasts of great cultural heritage and diversity. Libraries are an important means of preserving and disseminating this rich heritage, traditional knowledge, literature and other creative output.

Freely available copy cataloging of Indian publications using z39.50 service, to this day, remains largely a distant dream for most Indian library professionals. The scenario in the North East is even more difficult when it comes to publications in the local languages from this part of India.

The NECOPAC z39.50 service is a volunteer-run collaborative attempt at bridging this service gap. It is the first freely available, public z39.50 service in the entire North Eastern India. For us to grow bigger and be able to serve more LIS professionals in the NE, we need your active support and contributions i.e. a copy of your bibliographic records.

DISCLAIMER: The bibliographic records are provided on an AS-IS basis by this service and the NECOPAC team does not vouch for the correctness or accuracy of these records.

Connecting and downloading records from NECOPAC

You will need to set up your z39.50 client using these details:

Parameter     Setting
Server name   NECOPAC
Hostname      z3950.necopac.in
Port          9999
Database      biblios
Syntax        MARC21 / USMARC
Encoding      UTF-8
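Before wiring these settings into your LMS, you can test the service manually with the yaz-client command line tool (assuming YAZ is installed); the short session below is a sketch built from the parameters in the table above:

$ yaz-client z3950.necopac.in:9999/biblios
Z> format usmarc
Z> find librarianship
Z> show 1

Here format usmarc matches the MARC21 / USMARC syntax listed above, find takes any search term, and show 1 displays the first record retrieved.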

A knotty problem – an Arabic catalog that refused to work in Koha 17.11

Watching out for custom developed themes during Koha upgrades will save you a lot of grief.

Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.

– Sherlock Holmes [by Arthur Conan Doyle]

Early last week my good friend Vimal Kumar Vazaphally asked me if he could connect me to a friend of his who was facing some trouble after migrating to Koha 17.11 from 3.18. I agreed to take a look, since the problems that Vimal sends my way from time to time are usually interesting rather than mundane. Soon I was talking to Hussain (‎حسين رضوي‎) who manages the systems at the Public Library of Imam Hussain (AS) Holy Shrine, at Bab Alqibla, Karbala, Iraq.

The problem

Hussain informed us that things used to be fine on the older 3.18 system, but after migrating to 17.11, they could no longer use the cataloging editor properly: the value builder for the 008 MARC21 tag i.e. Fixed-Length Data Elements-General Information was no longer being displayed properly, and the data being pushed back was the default value.

The chase begins

After looking at his system over Teamviewer and not making much headway, I asked Hussain to send over a copy of his database and went to bed.

By the next day he had sent over his 1.2 GB database. I had it hoisted up on a test 17.11.01 system we have around, the same one we had used the week before to migrate Bangabasi College from their earlier Koha 3.14.06. Since it was a multi-tenanted setup, I could see the staged Bangabasi catalog working OK in it. Once the Zebra reindexing was done, our sleuthing started and we were able to reproduce the error.

The steps to the Truth

The first thing to check was MARC framework validation – OK! that was clean. Next up – a detailed manual check of the framework in use. But everything looked shipshape. So what we had here was a curious beast. Since we were on a multi-tenant system and one instance was working just fine, the problem had to be coming from somewhere in Hussain’s database. But from what exactly was the question. When I mentioned it on #koha (the IRC channel of Koha ILS), @wizzyrea (Liz Rea from CatalystIT) said that the issue sounded familiar and asked me to check whether it was bug no. 19473. It was not. However, 19473 looked interesting, so I added myself to the bug to track its status in future, and as I did, Bugzilla prompted – “The next bug in your list is bug 19965”.

Bug no. 19965 turned out to be Hussain’s bug, which he had reported on the Koha bug tracking database on 12th Jan 2018. This was from before we had connected. As I read through the comments on the bug by @cait – Katrin Fischer, our indomitable QA lead, incidentally visiting India at the moment for LTC2018 in Goa – I had my light bulb moment! 😉 I ran the 008 value builder and checked the JavaScript console. Firefox gave some hint that there was something wrong, but it was Google Chrome that showed exactly what was wrong.

Apparently, we had a 404 (file not found) error for marc21_field_008.xml. Now this is the file that Koha uses to load the list of values possible in the 008 field, and it is what drives the drop-downs in the 008 value builder. I knew that we certainly had the file on the system, which is why things had worked for Bangabasi. The error message showed that somehow Koha was looking for marc21_field_008.xml at a different location – intranet-tmpl/imam/en/data/marc21_field_008.xml rather than where it should be – intranet-tmpl/prog/en/data/marc21_field_008.xml.

The story was clear – this Koha installation had a custom staff client theme / template installed with the name imam. So now we checked the template system preference under “Staff client” in Administration -> System preferences. But no, the system preference showed only prog as the solitary value. Knowing that the staff client system preference values are principally driven by a YAML file inside Koha called staff_client.pref, I decided to go to the source, in other words – the database itself.

A simple query:

SELECT * FROM `systempreferences` WHERE variable = "template"

showed what I had suspected: the actual value was set to “imam”. Another line of SQL later

UPDATE `systempreferences` SET value = "prog" WHERE variable = "template"

I was ready to test it. And voilà! There it was… all working again! 😀

An explanation and an update

Now, you may be wondering why Koha showed the system preference template as only “prog” while its actual value was set to “imam”… is that not a bug? Well, not really! The value in the database is supposed to be one of the possible values in the YAML code for that system preference. So when you add a different value into the database and do not update the value in the YAML script, this is the expected fallback behavior.
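For the curious, the stanza that defines this preference in staff_client.pref looks roughly like the sketch below (illustrative, not the verbatim file); only the values listed under choices are recognized and offered by the UI:

# Illustrative sketch – the real staff_client.pref carries more entries.
Staff Client:
    Appearance:
        -
            - "Use the"
            - pref: template
              choices:
                  prog: prog
            - "theme on the staff interface."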

You may also wonder what happened to bug no. 19965 that Hussain had filed in the Koha Bugzilla. Since I had been able to verify that this was not a Koha bug, but rather a user-introduced data error, I closed the bug report with the status “RESOLVED-INVALID” along with this comment.

For Hussain, the first level solution was to update his database and point the installation to use the “prog” staff client theme / template rather than “imam”. At the secondary level, he would have to port his old 3.18 theme – “imam” – over to 17.11, which is quite likely to be a non-trivial exercise, should he / the library authority choose to do that.

Queen Mission School Salt Lake, your future catalog thanks you :-)

A high school adopts the use of authority records in their Koha ILS based library catalog.

Client-partners who work with L2C2 Technologies know how fond we are of authority records. Without basic authority control of personal names and subjects, catalogs over time end up having a mish-mash of variously spelled names and subjects which are essentially one and the same. One particular horror story from the time when we started cloud hosting Koha in Eastern India in 2015 was a case where we encountered the subject heading “Commerce” with 19 different spellings in the catalog, and its Bengali variant “বাণিজ্য” spelled in 3 more ways. And that was just one of those things. The name variations of the same author were even greater horrors. Yes, as you can tell by now, we feel rather strongly about the use of authority records, which essentially help create better quality catalog data and also provide an easy way to update and manage the spellings of headings.

It makes us happy that the three young women working on the cataloging project of Our Lady Queen of the Mission School, Salt Lake have taken heed of our suggestion to use authority files, since they are starting from scratch. They began with the simplest case – personal name authorised headings for the authors and editors of the books in the school’s collection. This work is being carried out by Priya Show and Shreya Mullick under the leadership of Swarnali Mitra, Librarian. Priya had earlier participated in a departmental workshop I had helped conduct at the Department of Library & Information Science, Calcutta University; with Shreya and Swarnali, I am working for the first time. But luckily for their project, all these young ladies are showing that they have what it takes – the vision and eagerness to do things the right way. Wishing them all the best.

Koha and the “magic” of XSLT – Part 2 : Show accession no. in OPAC results page

How to add 952$p (typically the accession no.) to your OPAC’s Results page display.

About 6 months back, we posted “Koha and the “magic” of XSLT : displaying new MARC fields on the OPAC“. This post can be thought of as its Part 2, as it introduces a couple of new concepts – (a) looping through a list of repeatable values and (b) punctuating these values for correct display. If XSLT, or Koha with XSLT, sounds like something you are hearing of for the first time, we strongly suggest that you read Part 1 first (see above).

The Backstory

Our tutorial style blog posts are usually the result of addressing some sort of user demand. In this case, this post came about because of Mr. Kalipada Jana, Librarian at Basanti Devi College, Kolkata. Yesterday he filed a ticket on our helpdesk saying that he would like the accession number(s) attached to each bibliographic record to be displayed on the OPAC results page. This is something that Koha does not do by default, but having seen such a display elsewhere he wanted the same.

The default Results page

[Screenshot: the default OPAC results page, without accession numbers]

What the user wanted

[Screenshot: the results page with an “Accession number(s)” line added]

The Process

Koha stores its holdings item identification in MARC21 tag 952$p. The user here was using this field to store the individually unique accession numbers of their items in holdings. Now a bibliographic record may quite easily have multiple copies with separate accession numbers. So the XSLT snippet we needed had to do the following:

  1. Handle looping over repeated 952$p values (when there are multiple copies of the same book).
  2. Separate the accession numbers with commas if there are multiple copies of a book, terminating the line after the final accession number with a period instead of a comma; if there is only a single copy, simply use a period.
  3. Suppress this accession number display for bibliographic records that do not have any holdings.

The code

<!-- L2C2 - 2017-11-28 adding accn no to results page 952$p -->
<xsl:if test="marc:datafield[@tag=952]/marc:subfield[@code='p']">
<span class="results_summary accn_no">
<span class="label">Accession number(s): </span>
    <xsl:for-each select="marc:datafield[@tag=952]">
        <xsl:value-of select="marc:subfield[@code='p']"/>
        <xsl:choose>
            <xsl:when test="position()=last()">
                <xsl:text>.</xsl:text>
            </xsl:when>
            <xsl:otherwise>
                <xsl:text>, </xsl:text>
            </xsl:otherwise>
        </xsl:choose>
    </xsl:for-each>
</span>
</xsl:if>

In the first line (not the comment) we check if the MARC record has a 952 field with a $p subfield. Only if it does is the rest of the code executed. This is what helps us suppress this new display for records without any accession numbers in 952$p. The next couple of lines push out the necessary HTML code, since the record has at least one accession number i.e. 952$p. In the lines immediately after, we set up the loop: it iterates through all the 952 (holdings) fields in the bib record, picking up the $p subfield of each. The innermost bit i.e. <xsl:choose><xsl:when test="position()=last()"> handles the punctuation. First it checks whether the current MARCXML 952 node is the last one; if so, it places a period and terminates the line. Otherwise, it places a comma as punctuation between the multiple accession numbers.
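For a bibliographic record with three copies, the rendered line on the results page would read something like the following (accession numbers invented purely for illustration):

Accession number(s): 1201, 1202, 1203.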

This code was added into the custom file which we named MARC21slim2OPACResults-bdcl.xsl. This file is the same as the default MARC21slim2OPACResults.xsl file, the only addition being the snippet given above. In order to get Koha to start using our file instead of the default, we placed the full path of our new file into the OPAC system preference OPACXSLTResultsDisplay.

See it in action

If you want to see this XSLT in action, click here.

Koha and the “magic” of XSLT : displaying new MARC fields on the OPAC.

A short tutorial on using XSLT to make MARC fields in your data visible on the OPAC.

Earlier today, Suresh Kumar Tejomurtula, a member of the venerable LIS-Forum mailing list being run out of IISc Bangalore, posted a question on that list:

Under language codes of 008/35-37 and also under 041$a I added the language code, e.g. tel for the Telugu language. But I do not see any difference in the view of the record, except that the MARC tags contain those values. Adding these fields’ data will enable the library team to understand the language of the material. My question is: how will the users of the OPAC, who do not know about MARC language codes, understand the item language [from just the 3-letter code sequence]?

Short answer : Using XSLT

Much of Koha’s superpowers on the OPAC (as also on the staff client) side come from its judicious use of XSLT. When we search for documents in Koha, the result that is returned from the database, by way of the various Perl modules that perform much of Koha’s internal plumbing, comes in as an XML (eXtensible Markup Language) document. More precisely, it is returned as a MARCXML record. Readers of this blog who are familiar with the MarcEdit software may have often encountered a MARCXML record. Those who are not so familiar may well like to read up a bit from here before proceeding with this post.

So what is XSLT?

Wikipedia gives the definition of XSLT as –

“XSLT (Extensible Stylesheet Language Transformations) is a language for transforming XML documents into other XML documents, or other formats such as HTML for web pages, plain text or XSL Formatting Objects, which may subsequently be converted to other formats, such as PDF, PostScript and PNG.”

Where to start?

At the OPAC level, the XSLT magic is primarily driven by two files i.e. (a) MARC21slim2OPACResults.xsl and (b) MARC21slim2OPACDetail.xsl. Koha’s default settings of how we see the search results on the OPAC and the document specific details in the Normal view, are defined in these two files.

N.B. Directly editing these two files is strictly not advised unless you are an XSLT guru.

Luckily for us, Koha’s system preferences provide options to override the defaults by creating new XSLT files and telling Koha to use those instead. The screenshot below shows the default setting.

[Screenshot: OPACXSLTDetailsDisplay and OPACXSLTResultsDisplay both set to “default”]

The two files are available inside a folder named xslt under the locale of the selected theme you are using, e.g. /usr/share/koha/opac/htdocs/opac-tmpl/bootstrap/en/xslt/ on a package-based installation. By default most of us in India would be using the English locale i.e. “en”, and the default Koha theme is “bootstrap”. If you are using a custom theme and/or a different locale, look for the files under that.

Step #1 : Copying the XSLT files

We will be fetching the 3-letter language code from MARC 008/35-37 of each MARC record. Since our Koha instance here is named “demo”, we’ll make a copy of the file MARC21slim2OPACResults.xsl as MARC21slim2OPACResults-demo.xsl and of MARC21slim2OPACDetail.xsl as MARC21slim2OPACDetail-demo.xsl.
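On a package-based installation, the copies boil down to something like this (a sketch; the xslt folder path is the one mentioned above):

cd /usr/share/koha/opac/htdocs/opac-tmpl/bootstrap/en/xslt/
sudo cp MARC21slim2OPACResults.xsl MARC21slim2OPACResults-demo.xsl
sudo cp MARC21slim2OPACDetail.xsl MARC21slim2OPACDetail-demo.xsl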

N.B. You can give any name to the copies you make, but it is suggested that the less adventurous you are with the names, the easier it will be for you to trace your steps back if you make a mistake while editing them. And trust me, you will make mistakes, at least initially.

Step #2 : Editing MARC21slim2OPACResults-demo.xsl

The MARC21slim2OPACResults.xsl file is what drives the display of the results of a search on the Koha OPAC, and it does not show the value of 008/35-37 i.e. the MARC language code, while displaying the search result. To add this facility, we need to do the following to our copy i.e. MARC21slim2OPACResults-demo.xsl. Around line no. 70 or thereabouts, there will be this line: <xsl:variable name="controlField008-30-31" select="substring($controlField008,31,2)"/>. Add the following line after it – <xsl:variable name="controlField008_35-37" select="substring($controlField008,36,3)"/>.

Explanation: we’re defining a new variable named controlField008_35-37, and in it we are storing the 3-letter value found at 008/35-37 from another variable – controlField008.

The second step in editing this file is to add the code that checks whether (a) 008/35-37 actually exists (e.g. if you have a blank 008 field, 008/35-37 won’t exist) and (b) it is set to something other than “und” (i.e. undefined). The following code does that first, and then proceeds to match the value found in 008/35-37 against 23 different languages. These languages are English and the 22 official languages of India as per the Eighth Schedule of the Indian Constitution. You can add or remove any language from this list as you require. However, when you do, please make sure that the codes you set up in this list are coming from the Code Sequence document of the MARC Code List for Languages as published by the US Library of Congress.

This means that you must also be using these same exact codes in your MARC records in Koha.

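An abridged sketch of that block is given below, shortened to three languages to keep it readable; the variable name follows the edit above, while the surrounding span markup is an assumption modeled on how Koha renders other results fields:

<!-- Sketch: turn the 008/35-37 code into a human readable language name.
     Extend the xsl:choose with one xsl:when per language you need. -->
<xsl:if test="$controlField008_35-37 and $controlField008_35-37!='und'">
  <span class="results_summary language">
    <span class="label">Language: </span>
    <xsl:choose>
      <xsl:when test="$controlField008_35-37='eng'">English</xsl:when>
      <xsl:when test="$controlField008_35-37='ben'">Bengali</xsl:when>
      <xsl:when test="$controlField008_35-37='tel'">Telugu</xsl:when>
      <xsl:otherwise><xsl:value-of select="$controlField008_35-37"/></xsl:otherwise>
    </xsl:choose>
  </span>
</xsl:if>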

Where we place this code will depend on where we want the language to be listed in the results. In our case, we decided to add it between Material type and Format, and thus the code was added at line no. 637. If you too want language to be displayed between Material type and Format, then look out for this line – <xsl:if test="string-length(normalize-space($physicalDescription))"> – and add this block before it.

We saved the file and moved on to the next section.

Step #3 : Editing MARC21slim2OPACDetail-demo.xsl

The process is similar to what we just did above. Open the file MARC21slim2OPACDetail-demo.xsl for editing, go to line no. 51 (or thereabouts) and look for this line: <xsl:variable name="controlField008" select="marc:controlfield[@tag=008]"/>. Add the following line immediately after it: <xsl:variable name="controlField008_35-37" select="substring($controlField008,36,3)"/>.


Since we wanted the “Language of document” to come after Material type, we added the code immediately after the node <xsl:if test="$DisplayOPACiconsXSLT!='0'"> is closed, but before <!--Series: Alternate Graphic Representation (MARC 880) -->; in our case, this worked out to be around line no. 195. Finally, we saved and closed the file.

Getting Koha to use the new XSLT files

The Koha system preference OPACXSLTResultsDisplay was changed from its original setting (i.e. “default”) to the path of our new file MARC21slim2OPACResults-demo.xsl i.e. /usr/share/koha/opac/htdocs/opac-tmpl/bootstrap/en/xslt/MARC21slim2OPACResults-demo.xsl. Likewise, the other system preference OPACXSLTDetailsDisplay was changed to /usr/share/koha/opac/htdocs/opac-tmpl/bootstrap/en/xslt/MARC21slim2OPACDetail-demo.xsl. It was time to test our XSLT modifications.

And it works!

Since pictures are said to be worth a thousand words, we will let the before and after screen grabs do the talking here. Also, if anyone wants to see directly how it actually looks after applying the changed XSLT, visit the URL http://demo-opac.l2c2academy.co.in/cgi-bin/koha/opac-detail.pl?biblionumber=18

Before : Using the default MARC21slim2OPACResults.xsl


After : Using the new MARC21slim2OPACResults-demo.xsl


Before : Using the default MARC21slim2OPACDetail.xsl


After : Using the new MARC21slim2OPACDetail-demo.xsl


Adding autocomplete support to MARC21 260 / 264 (imprint) fields in Koha

Adding auto-complete feature to Koha’s MARC21 260 / 264 field (Imprint – place, name of publisher, distributor etc) as a cataloging aid.

As per LC’s AACR2 (as well as RDA) instructions, the imprint information captured in MARC21 field 260 (AACR2) and 264 (RDA) should be *transcribed* rather than *recorded*, using the principle – “Take What You See and Accept What You Get” [1]. As a result, 260$a (place) and 260$b (publisher, distributor etc.) are usually not handled as fields whose values are controlled using authorized values.

Up until 2001, the 260 field was a non-repeatable (NR) field. It was made repeatable to accommodate frequent publisher name changes.

Since 260$a and 260$b are not usually guided by authorized values, it is noted (at least in the Indian subcontinent context) that catalogers often make typographical errors while transcribing the data, e.g. “Pearson Education” may be inadvertently entered as “Peerson Education” or even as “Pearshon education”, while “Kolkata” may have been entered both as “Kolkata” and as “Kolkatta” or even “Kolkhata”. Without authority control of the field, this cataloging quality check is often overlooked. Errors like this often end up affecting the results of advanced searches or custom SQL reports.
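A quick way to see whether your own catalog suffers from this is to list the distinct publisher strings already recorded; sorted alphabetically, the near-duplicates jump out. A sketch, using the same biblioitems.publishercode column that the autocomplete below draws from:

-- List every distinct publisher string with its usage count;
-- variants like 'Peerson Education' will sort next to the correct form.
SELECT `publishercode`, COUNT(*) AS `times_used`
FROM `biblioitems`
GROUP BY `publishercode`
ORDER BY `publishercode`;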

Luckily for us, Koha ILS uses jQuery extensively and (via jQuery UI) provides nice autocomplete widgets, like the ones we see in action when we type in part of a borrower’s name in patron search (checkout) or in the authority headings search, where entering 3 characters triggers an AJAX based lookup with the option to select one of the offered list *OR* to type in our own.

AJAX stands for “Asynchronous JavaScript and XML”. In simple terms it encompasses a set of web development techniques that allow us to fetch and load data from a remote server into our currently open page, without requiring us to refresh / reload the page. [2]

Recently a client requested that we offer them a way to look up publisher names and place names (for field 260) already entered into their Koha instance, without having to type them in full every time. For example, their database already had the following publisher names entered – “Pearson Education, Prentice Hall India, PacktPub, Press Trust of India” etc. Now if they encountered an item from one of these publishers, they should be able to pull up a list just by entering “P” into the 260$b field and then select the one applicable. And if they encountered a publisher name not yet in the list, say “Penguin Books”, they should be able to type it in as well.

Step #1 : Setting up the backend script

Koha 16.11 ships with 3 (three) different ysearch.pl scripts that show us how to achieve this. You can find out which ones these are by using the command `locate ysearch.pl`. NOTE: You may be required to run `sudo updatedb` once before locate finds the files. For our requirement, we modeled our script, which we’ll call 260search.pl, on /usr/share/koha/intranet/cgi-bin/cataloguing/ysearch.pl. You can grab a copy of 260search.pl from L2C2’s github repo here [3]. The script returns a JSON based result set if results matching your input are found.
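Going by the item.fieldvalue key that the jQuery snippet in Step #2 expects, a successful lookup for “P” would return something along these lines (an illustrative sketch, not captured output):

[
  { "fieldvalue": "PacktPub" },
  { "fieldvalue": "Pearson Education" },
  { "fieldvalue": "Prentice Hall India" }
]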

Remember that **every** script that Koha executes needs its executable bit set, and so does this one. Therefore, do *not* forget to set the executable bit with `sudo chmod a+x /usr/share/koha/intranet/cgi-bin/cataloguing/260search.pl` before you proceed to the next step.

Step #2 : Enabling the fields

With the script in place, we now need to turn to the IntranetUserJS system preference and enter the following jQuery snippet to enable autocomplete on 260$a and 260$b:

$(document).ready(function(){
  $( '[id^="tag_260_subfield_a"]' ).autocomplete({
    source: function(request, response) {
      $.ajax({
        url: "/cgi-bin/koha/cataloguing/260search.pl",
        dataType: "json",
        data: {
          term: request.term,
          table: "biblioitems",
          field: "place"
        },
        success: function(data) {
          response( $.map( data, function( item ) {
            return {
              label: item.fieldvalue,
              value: item.fieldvalue
            }
          }));
        }
      });
    },
    minLength: 1,
  });
  $( '[id^="tag_260_subfield_b"]' ).autocomplete({
    source: function(request, response) {
      $.ajax({
        url: "/cgi-bin/koha/cataloguing/260search.pl",
        dataType: "json",
        data: {
          term: request.term,
          table: "biblioitems",
          field: "publishercode"
        },
        success: function(data) {
          response( $.map( data, function( item ) {
            return {
              label: item.fieldvalue,
              value: item.fieldvalue
            }
          }));
        }
      });
    },
    minLength: 1,
  });
});

A video of autocomplete in action

Conclusion

By tweaking the 260search.pl script, or even by completely re-writing it to use the various search functions shipped by Koha inside its /usr/share/koha/lib directory on a .deb package based installation, you can do much more than is possible with this simple hack. Happy hacking! 🙂 [4]

References:

[1] http://www.loc.gov/catworkshop/RDA%20training%20materials/LC%20RDA%20Training/Module1IntroManifestItemsSept12.doc

[2] https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started

[3] https://gist.github.com/l2c2technologies/7d0449dcb80c90880381ef4571003d1d

[4] http://catb.org/jargon/html/H/hack.html

JQuery tips for Koha : Adding easy to use indicator picklists

Adding picklists for selecting indicators for MARC tags used in Koha’s cataloging worksheets.

During data audits of users’ MARC21 data, we quite frequently find that most, if not all, records make no use of indicators at all. Trained library professionals often give a sheepish grin when asked why they didn’t add them while cataloging the documents. 😉 But trained librarians are not the only ones who work with library systems like Koha. There are many people who find themselves working in a library without formal training or sufficient theoretical background on MARC21. Generally speaking, the reasons for not adding the indicators range from:

  • Lack of practise – thus unsure of the correct indicator to use.
  • Lack of awareness – i.e. untrained people with a very basic knowledge of cataloging
  • Lack of user-friendly mechanism to input indicators
  • And lastly – sheer laziness

Now, about the last one we can’t do anything, but the rest of the reasons could use a bit of a leg-up! So here goes the newest tutorial on how to add easy-to-use picklists to help us correctly populate the indicators.

According to the Design Principles of MARC21, indicators form a part of the family of content designators [1]. As defined, an indicator is :

A data element associated with a data field that supplies additional information about the field. An indicator may be any ASCII lowercase alphabetic, numeric, or blank.

For this tutorial we will focus on MARC21 bibliographic data fields 100 and 110 i.e. Main Entry Personal Name and Main Entry Corporate Name respectively. We will not touch the Koha template files at all; rather, as per the global best practice for Koha ILS, we will utilize only jQuery (JavaScript) and HTML via the Koha system preference IntranetUserJS.

Step #1 – Finding out the DOM nodes

We will start by going to Home > Cataloging > Add MARC record in Koha and selecting the framework we want to work on. In this case we chose to work with the “Default framework” that ships with Koha. We used Google Chrome’s Developer Tools Inspect option [2] to find out the id of the selector (DOM node) we needed for Main Entry Personal Name.

Since we need space to set up the picklist, we chose to use the free space available on the div that displays information about the field, which follows immediately after it. As you can see in the image below, that div has an id identifying it, which is very good for us, since it makes selecting the DOM node absolutely painless.

[Screenshot: Chrome DevTools highlighting the div_indicator_tag_100 element]

It should be noted that when Koha renders the cataloging interface, it suffixes the HTML element IDs with a random number (one for each new tag). In this case, the id was div_indicator_tag_100_838390, where “838390” is the random suffix. We needed to latch on to the first part i.e. div_indicator_tag_100.

Step #2 – Let the JQuery magic work

We have to add the select dropdown picklists right after the text on the div_indicator_tag_XXX DIVs. The values we will use for the indicators come from the MARC21 bibliographic documentation for fields 100 and 110 respectively.

$(document).ready(function(){
  if ( $("#cat_addbiblio").length ) {	// only while adding biblios
    $('div[id^="div_indicator_tag_100"]').append(' <label for="tag_100_indicators">Apply Ind1, Ind2</label> <select id="tag_100_indicators"><option>-Select-</option><option value="1">1 - Surname</option><option value="0">0 - Forename</option><option value="3">3 - Family name</option></select>');
    $('div[id^="div_indicator_tag_110"]').append(' <label for="tag_110_indicators">Apply Ind1, Ind2</label> <select id="tag_110_indicators"><option>-Select-</option><option value="2">2 - Name in direct order</option><option value="0">0 - Inverted name</option><option value="1">1 - Jurisdiction name</option></select>');
  } // end if
});


While that added the picklists, we still have to add the actual logic that will allow the indicators to be populated on selecting from the list. Again we will turn to JQuery for the following snippet:

$(document).ready(function(){
  $('#tag_100_indicators').click(function(){
    var what_clicked_100 = $('#tag_100_indicators').val();
    if ( !isNaN(what_clicked_100) ) {
      $('input[name^="tag_100_indicator1"]').val(what_clicked_100);
      $('input[name^="tag_100_indicator2"]').val("#");
    } else {
      $('input[name^="tag_100_indicator1"]').val("");
      $('input[name^="tag_100_indicator2"]').val("");
    }
  });
  $('#tag_110_indicators').click(function(){
    var what_clicked_110 = $('#tag_110_indicators').val();
    if ( !isNaN(what_clicked_110) ) {
      $('input[name^="tag_110_indicator1"]').val(what_clicked_110);
      $('input[name^="tag_110_indicator2"]').val("#");
    } else {
      $('input[name^="tag_110_indicator1"]').val("");
      $('input[name^="tag_110_indicator2"]').val("");
    }
  });
});

The code above listens for when we click and select a value from the picklists i.e. when we trigger a click JavaScript event. Next it checks whether we selected a real value OR whether we just “clicked” on the placeholder “-Select-” option that has no value. And lastly, based on what we selected, it sets the ind1 and ind2 values accordingly.


Conclusion

In this manner we can add easy-to-use picklists for indicators. Since it is now only a matter of selecting from the available values, it also significantly reduces the scope for typographical errors during data entry into the indicator boxes. Before we leave for today, do note that the second code listing may be better handled as a JavaScript function to which the references are passed by a handler hook. Doing so would make for a cleaner and leaner implementation of this concept, especially if you are planning to set it up for all the non-control MARC21 fields you use. Also, you may wish to implement the selected dropdown value check using something other than isNaN [3].
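A minimal sketch of that suggested refactor, keeping the tag numbers and the isNaN check from the listings above (hypothetical code, to be treated as a starting point):

// Generic handler: wires up the indicator picklist for any tag whose
// picklist was added following the pattern shown earlier.
function applyIndicatorPicklist(tag) {
  $('#tag_' + tag + '_indicators').click(function(){
    var clicked = $(this).val();
    var valid = !isNaN(clicked);   // the "-Select-" placeholder fails this test
    $('input[name^="tag_' + tag + '_indicator1"]').val(valid ? clicked : "");
    $('input[name^="tag_' + tag + '_indicator2"]').val(valid ? "#" : "");
  });
}

$(document).ready(function(){
  ['100', '110'].forEach(function(tag){ applyIndicatorPicklist(tag); });
});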

References

[1] “MARC 21 Specifications for Record Structure, Character Sets, and Exchange Media – RECORD STRUCTURE (2000)” https://www.loc.gov/marc/specifications/specrecstruc.html

[2] “Chrome DevTools” – https://developers.google.com/web/tools/chrome-devtools/

[3] isNaN() – https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/isNaN

Customizing Koha’s MARC21 frameworks? Know the rules or get help!

Either you know what you are doing, or you take time to learn, or you invest in quality support. Fail on all three counts and you are quite literally asking for an operational nightmare.

Recently a young colleague, Sri Ashkar K. from Thiruvananthapuram, Kerala (India), ran into a problem. He works as a librarian with Mathrubhumi, a major media house from Kerala. Specifically, he needed an LMS solution to efficiently manage their collection of entertainment (mostly movie) related CDs and DVDs. For him the LMS was hosted Koha. However, when he tried to issue an item i.e. a movie CD, he was stumped by this error every time:

Software error
No branchcode argument passed to koha::Calender->new at /var/koha_all/mathrubhumi/lib/C4/Circulation.pm line 3558

Being on a hosted Koha platform, he approached his service provider for support. He shared with them all the relevant screenshots leading to the error detailed above.

The provider’s tech support could not identify the issue and instead informed him that they could perform checkouts (issue) without any errors. As Ashkar persisted, the service provider’s support desk asked him to provide remote desktop sharing using Teamviewer so that they could see “his problem” in action. Installing Teamviewer needed clearance from his IT department which required time and thus Ashkar’s checkout problem continued to linger. Finally about 10 days back he posted about it on the official Facebook page of Koha Library System Project, asking for suggestions to resolve it.

The first flag was raised by fellow Koha dev Mark Tompsett when he asked:

“/var/koha_all/mathrubhumi/lib/C4/Circulation.pm” — That is not a standard installation path. How did you install this? And what version?

Ashkar replied that since the software was hosted, he did not know the installation details. This got my attention! If he was on hosted Koha, why was he turning to the community for support? What was his service provider doing in the first place? I decided to find out more, and that’s when I discovered the details of his situation. Desperate for help, he provided me with superlibrarian access to his hosted Koha’s staff client interface. I logged in and found that the problem was very real. In fact, I found a few rather *disturbing* things.

The hosted Mathrubhumi Koha instance wasn’t running on the stable version (16.05.05 at the time of writing) of Koha ILS. In fact, it was running on an unstable development version (at the time of writing, Koha 16.06.00.023). Development versions are not GA releases and are *never* meant for production use; they are meant for use by testers and developers. And secondly, I could not do a MARC21 export of his bibliographic data.

That set alarm bells ringing in my head and so with Ashkar’s approval, I created a backup of his Koha database and installed the backup on L2C2’s test server running the latest stable 16.05.x version.

The first clear indication of what was wrong came soon after running sudo koha-rebuild-zebra -v -f mathrubhumi successfully, without any error. A wildcard search from both the OPAC as well as the staff client failed to return a single result, even though the Zebra indexer and output logs showed no errors. However, it was possible to access a record directly by its biblionumber.

Running the “MARC Bibliographic framework test” to check the MARC structure provided the answer. Sure enough, there were two major errors, as shown below:

homebranch NOT mapped – the items.homebranch field MUST:

  • be mapped to a MARC subfield,
  • the corresponding subfield MUST have “Authorized value” set to “branches”

holdingbranch NOT mapped – the items.holdingbranch field MUST:

  • be mapped to a MARC subfield,
  • the corresponding subfield MUST have “Authorized value” set to “branches”

The question now was to identify *which* MARC21 framework was the culprit, since he had three (03) of them.

[Screenshot: Ashkar’s three MARC21 frameworks]

Checking the “MOVIES” framework, we found that both 952 $a (homebranch) and $b (holdingbranch) were set to ignore in the Managed in tab dropdown. This explained the error displayed by the “MARC Bibliographic framework test”. To know more about the 952 MARC21 field in Koha, please read Holdings data fields (9xx) from the Koha community wiki.
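If you prefer the database view, the same finding can be confirmed with a quick query against the marc_subfield_structure table (a sketch, assuming the framework code is MOVIES):

-- For a working framework, 952$a and 952$b should report tab 10 (items);
-- here they reported tab -1 i.e. ignored.
SELECT `tagfield`, `tagsubfield`, `tab`
FROM `marc_subfield_structure`
WHERE `frameworkcode`='MOVIES'
AND `tagfield`='952'
AND `tagsubfield` IN ('a', 'b');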

The Fix

It was a simple matter of setting both 952 $a and $b to “items(10)” for the Managed in tab option. This took care of the “MARC Bibliographic framework test” error.

However, that was only the first part of the solution. Except for two, none of his other 23 bibliographic records had their homebranch and holdingbranch defined. It was time for a batch item modification from the Tools page (Home > Tools). This has been covered in detail in an earlier blog post – “Koha’s MARC modification templates comes to the rescue“ – so if the topic sounds unfamiliar, it is suggested that you read that post first.

In order to find all the barcodes needed to update the records, the following SQL report was used:

SELECT items.barcode 
  FROM items 
  LEFT JOIN biblioitems ON (items.biblioitemnumber=biblioitems.biblioitemnumber) 
  LEFT JOIN biblio ON (biblioitems.biblionumber=biblio.biblionumber);

With the list of barcodes in hand, it was time for the final steps:

  1. Load up the barcodes for the records to be bulk modified
  2. Select the two fields that we wanted to update – homebranch and holdingbranch
  3. Select the actual branch option for both and click on Save
And we were done! 🙂

The Explanation

Understanding the error is quite simple if you know how circulation works inside Koha. A checkout operation needs to know a few basic things – (a) who owns the item; (b) where the item is presently located; (c) what to set as the issue and due dates and (d) who is taking it. Since the items attached to the bibliographic records created using the MOVIES MARC21 framework did not have their homebranch and holdingbranch defined, Koha failed when, at the time of checkout, it tried to set the issue date and calculate the due date using the date functions of the Koha::Calendar object. That’s what gave Ashkar his error and prevented him from checking out an item.

This still left one question unanswered – why did Ashkar’s hosting provider keep insisting that everything was working OK at their end, wanting him to provide them with Teamviewer access instead? My best guess is that they were checking the system using only the MARC21 frameworks which *they* had shipped i.e. the default and fast add (FA) frameworks. Since records generated using these two frameworks (quite correctly) had 952 $a and $b set, none of them triggered Ashkar’s error during checkout. They certainly did not need Teamviewer access; the error in Ashkar’s framework should have been easily detected and quickly fixed. In fact, it took less than 3 minutes to take care of it. But they failed, which is why it is important to either invest in your own skill development (read RTFM) OR invest in quality support.

“If you pay peanuts, you get monkeys” – James Goldsmith

Moral of the story: if you work with service providers whose front line tech support is staffed with inexperienced people, be prepared for the long haul and to support yourself. Caveat emptor!

Koha spine label is not printing the “/” in your call numbers? Here is why.

If you have defined DDC as your classification source and have a “/” in your Koha item call number, it is not going to be displayed when generating spine labels. If you are in a hurry, or you are already aware of the segmentation mark, you can jump straight to the section The Answer.

The “Problem”

Earlier in the day, a fellow user Dyuti Samanta came up with a question:

“Sir, I’m trying to print spine labels from Koha. However, I see that Koha does not print the front slash (“/”) in my itemcallnumber, even though the same is recorded in my MARC record and is otherwise displayed by Koha elsewhere. For example, the “CHA / L” in “025.4 CHA / L” is being printed as “CHAL”. So where is the problem, and how can I fix it?”

The Background

Dyuti’s question made me smile. Instead of immediately telling him about the “why”, I pointed him to a comment left by Anamika Das on Vimal Kumar Vazaphally‘s blog post “Spine label creation”, saying “You are not alone with that question! ;-)”

A call number typically consists of a Dewey class number + a book number i.e. a Cutter number (or some other means of alphabetic arrangement). The front slash “/” is deemed a segmentation mark (à la the prime mark in C-I-P records) in the universe of the Dewey Decimal Classification [1]. Up until DDC 22, published in 2003 [2], the slash or the prime mark was used to mark the start of every standard subdivision (notation from Table 1) as well as the end of the abridged number. However, this rule changed from DDC 22 onwards (September 1, 2005 to be exact) and remains extant for the current edition i.e. DDC 23, published in 2011. The new rule is that only *one* single segmentation mark may be used, and that too only for marking the end of the abridged number [3].

A prior and post example, straight from the Library of Congress:

Before DDC 22 – 551.21/09797/84

DDC 22 onward – 551.210979/84

Further, if you follow LC and OCLC norms, while a Dewey class number in MARC21 field 082 can definitely have (since Sep 1, 2005) a *single* segmentation mark, the call number should never have one. With this background story in place, let’s look at Koha to understand what is happening here.

The Answer

The particular Koha code that has taken out the slash from both Dyuti’s and Anamika’s call numbers resides in the C4::Labels::Label Perl module, which is located at /usr/share/koha/lib/C4/Labels/Label.pm. Even more specifically, it is the _split_ddcn subroutine in Label.pm that is taking out the “/”. As we have already noted, under LC rules, call numbers (unlike Dewey class numbers in 082) can’t have segmentation marks. Thus it takes out any “/” embedded in your call number while processing the spine label. Very specifically, it is this line in the _split_ddcn subroutine that does it: s/\///g; # in theory we should be able to simply remove all segmentation markers and arrive at the correct call number. And just why does _split_ddcn get invoked? Well, it is because of something you did during cataloging; remember that you recorded DDC as the classification schema? It is that definition in your MARC record that calls in this sub 😀

Below you can see the _split_ddcn subroutine as on the date of this post.

sub _split_ddcn {
    my ($ddcn) = @_;
    $_ = $ddcn;
    s/\///g;   # in theory we should be able to simply remove all segmentation markers and arrive at the correct call number...
    my (@parts) = m/
        ^([-a-zA-Z]*\s?(?:$possible_decimal)?) # R220.3  CD-ROM 787.87 # will require extra splitting
        \s+
        (.+)                               # H2793Z H32 c.2 EAS # everything else (except bracketing spaces)
        \s*
        /x;
    unless (scalar @parts)  {
        warn sprintf('regexp failed to match string: %s', $_);
        push @parts, $_;     # if no match, just push the whole string.
    }

    if ($parts[0] =~ /^([-a-zA-Z]+)\s?($possible_decimal)$/) {
          shift @parts;         # pull off the mathching first element, like example 1
        unshift @parts, $1, $2; # replace it with the two pieces
    }

    push @parts, split /\s+/, pop @parts;   # split the last piece into an arbitrary number of pieces at spaces
    $debug and print STDERR "split_ddcn array: ", join(" | ", @parts), "\n";
    return @parts;
}

Note: The _split_ddcn was first submitted to the Koha codebase as part of C4::Labels::Label module by Chris Nighswonger on Jul 20, 2009, by which time the LC’s single segmentation mark rule was already long in place.

So now what?

There are a few options available to you at this point.

(a) If you know what you are doing, you can modify the _split_ddcn subroutine so that it does not discard the “/” and handles the call number as you want it to. (Non-trivial and not recommended.)


(b) Go to “Manage Layouts” and edit your specific layout, un-checking the option “Split call number”. If you do this, your call number will be printed AS-IS as a single line of text. This means that if the call number is longer than the size of your labels, as it will be at several points in time, you have a *problem*.

(c) Keep an eye on this bug report filed by Katrin Fischer earlier this year, where she has said:

Currently the call number splitting seems to be mostly implemented for DDC and LC classifications. Those are both not very common in Germany and possibly other countries. A lot of our libraries use their own custom classification schemes so the call number splitting is something that should be individually configurable.

The bad news is that so far no one has responded to this bug, simply because to Koha developers servicing clients using LC / DDC, this is not a priority. So either you can wait with the hope that someone will soon attend to this bug, OR you can write this functionality yourself, OR you can sponsor a developer to write it for you.

(d) Take the item call number listing out of Koha as a CSV file and use a 3rd party tool, e.g. gLabels, to generate your spine labels.
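For option (d), a simple SQL report along these lines (a sketch; run it from Koha’s Reports module and export the result as CSV) will pull the raw call numbers with their slashes intact:

-- Barcode and untouched call number for every item, ready for a
-- 3rd party label tool such as gLabels.
SELECT `barcode`, `itemcallnumber`
FROM `items`
ORDER BY `itemcallnumber`;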

References:

[1] https://www.loc.gov/aba/dewey/segmentation.html

[2] Dewey_Decimal_Classification – Administration_and_publication

[3] “Sweet segment solution” from 025.431: The Dewey blog