Customizing Koha’s MARC21 frameworks? Know the rules or get help!

Either you know what you are doing, or you take the time to learn, or you invest in quality support. Fail on all three counts and you are quite literally asking for an operational nightmare.

Recently a young colleague, Sri Ashkar K. from Thiruvananthapuram, Kerala (India), ran into a problem. He works as a librarian with Mathrubhumi, a major media house from Kerala, and needed an LMS solution to efficiently manage their collection of entertainment-related (mostly movie) CDs and DVDs. For him, that LMS was a hosted Koha. However, every time he tried to issue an item, i.e. a movie CD, he was stumped by this error:

Software error
No branchcode argument passed to koha::Calender->new at /var/koha_all/mathrubhumi/lib/C4/Circulation.pm line 3558

Being on a hosted Koha platform, he approached his service provider for support. He shared with them all the relevant screenshots leading to the error detailed above.

The provider’s tech support could not identify the issue and instead informed him that they could perform checkouts (issues) without any errors. As Ashkar persisted, the service provider’s support desk asked him to provide remote desktop sharing via Teamviewer so that they could see “his problem” in action. Installing Teamviewer needed clearance from his IT department, which took time, and so Ashkar’s checkout problem continued to linger. Finally, about 10 days back, he posted about it on the official Facebook page of the Koha Library System Project, asking for suggestions to resolve it.

The first flag was raised by fellow Koha dev Mark Tompsett when he asked:

“/var/koha_all/mathrubhumi/lib/C4/Circulation.pm” — That is not a standard installation path. How did you install this? And what version?

Ashkar replied that since the software was hosted, he did not know the installation details. This got my attention! If he was on hosted Koha, why was he turning to the community for support? What was his service provider doing in the first place? I decided to find out more. That’s when I discovered the details of his situation. Desperate for help, he provided me with superlibrarian access to his hosted Koha’s staff client interface. I logged in and found that the problem was very real. In fact, I discovered a few rather *disturbing* things.

The hosted Mathrubhumi Koha instance wasn’t running on the stable version of Koha ILS (which is 16.05.05 at the time of writing). In fact, it was running on an unstable development version (at the time of this writing it was using Koha 16.0600023). Development versions are not GA releases and are *never* meant for production use; they are meant for testers and developers. And secondly, I could not do a MARC21 export of his bibliographic data.

That set alarm bells ringing in my head, and so with Ashkar’s approval I created a backup of his Koha database and restored that backup on L2C2’s test server running the latest stable 16.05.x version.

The first clear indication of what was wrong came soon after running sudo koha-rebuild-zebra -v -f mathrubhumi successfully without any error. A wildcard search from both the OPAC and the staff client failed to return a single result, even though the Zebra indexer and output logs showed no errors. However, it was still possible to access a record directly by its biblionumber.

Running the “MARC Bibliographic framework test” to check the MARC structure provided the answer. Sure enough, there were two major errors, as shown below:

homebranch NOT mapped: the items.homebranch field MUST

  • be mapped to a MARC subfield,
  • the corresponding subfield MUST have “Authorized value” set to “branches”

holdingbranch NOT mapped: the items.holdingbranch field MUST

  • be mapped to a MARC subfield,
  • the corresponding subfield MUST have “Authorized value” set to “branches”

The question now was to identify *which* MARC21 framework was at fault, since he had three (03) of them.

marcframe1
Ashkar’s MARC21 frameworks

Checking the “MOVIES” framework, it was found that both 952 $a (homebranch) and $b (holdingbranch) were set to ignore in the Managed in tab dropdown. This explained the error displayed by the “MARC Bibliographic framework test”. To know more about the 952 MARC21 field in Koha, please read Holdings data fields (9xx) from the Koha community wiki.

The Fix

It was a simple matter of setting both 952 $a and $b to “items(10)” for the option Managed in tab. This took care of the “MARC Bibliographic framework test” error.
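
For those who like to verify things at the database level, a read-only query against Koha’s marc_subfield_structure table can confirm the mapping for any framework. This is a sketch, assuming the standard Koha schema; note that a tab value of -1 means “ignore”, and that 'MOVIES' below stands in for the actual framework *code*, which may differ from the display name:

-- Expected after the fix: tab = 10, kohafield = items.homebranch /
-- items.holdingbranch, and authorised_value = 'branches'
SELECT frameworkcode, tagfield, tagsubfield, tab, kohafield, authorised_value
  FROM marc_subfield_structure
 WHERE frameworkcode = 'MOVIES'
   AND tagfield = '952'
   AND tagsubfield IN ('a', 'b');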

However, that was only the first part of the solution. Except for two, none of his other 23 bibliographic records had their homebranch and holdingbranch defined. It was time for a batch item modification from the Tools page (Home > Tools). This has been covered in detail in an earlier blog post – “Koha’s MARC modification templates comes to the rescue“ – so if the topic sounds unfamiliar, it is suggested that you read that post first.

In order to find out all the barcodes that needed to be used to update the records, the following SQL report was used:

SELECT items.barcode 
  FROM items 
  LEFT JOIN biblioitems ON (items.biblioitemnumber=biblioitems.biblioitemnumber) 
  LEFT JOIN biblio ON (biblioitems.biblionumber=biblio.biblionumber);

With the list of barcodes in hand, it was time for the final steps:

  1. Load up barcodes for the records to be bulk modified
    marcframe4
  2. Select the two fields that we wanted to update – homebranch and holdingbranch
    marcframe2
  3. Select the actual branch option for both and click on Save
    marcframe3
And we were done! 🙂

The Explanation

Understanding the error is quite simple if you know how circulation works inside Koha. A checkout operation needs to know a few basic things – (a) who owns the item; (b) where the item is presently located; (c) what to set as the issue and due dates; and (d) who is taking it. Since the items attached to bibliographic records created using the MOVIES MARC21 framework did not have their homebranch and holdingbranch defined, Koha failed at checkout when it tried to set the issue date and calculate the due date using the date functions of the Koha::Calendar object. That’s what gave Ashkar his error and prevented him from checking out an item.
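
A minimal sketch (not verbatim Koha code) of why this dies: Koha::Calendar refuses to be instantiated without a branchcode, and during checkout that branchcode is derived from the item’s branch fields.

use Koha::Calendar;

# Pretend this is an item whose 952 $a/$b were never saved:
my $item = { homebranch => undef, holdingbranch => undef };

my $branch   = $item->{holdingbranch};    # undef
my $calendar = Koha::Calendar->new( branchcode => $branch );
# croaks: "No branchcode argument passed to Koha::Calendar->new"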

This still left one question unanswered: why did Ashkar’s hosting provider keep insisting that everything was working OK at their end, and why did they want Teamviewer access instead? My best guess is that they were testing checkouts using only the MARC21 frameworks which *they* had shipped, i.e. the default and fast add (FA) frameworks. Since records generated using these two frameworks (quite correctly) had 952 $a and $b set, none of them triggered Ashkar’s error during checkout. They certainly did not need Teamviewer access; the error in Ashkar’s framework should have been easily detected and quickly fixed. In fact, it took less than 3 minutes to take care of it. But they failed, which is why it is important to either invest in your own skill development (read RTFM) OR invest in quality support.

“If you pay peanuts, you get monkeys” – James Goldsmith

Moral of the story: If you work with service providers whose front-line tech support is staffed with inexperienced people, be prepared for the long haul and to support yourself. Caveat emptor!

Setting-specific ‘lockdown’ of Koha’s system preference options

Individual ‘lockdown’ of Koha’s system preference settings using a bit of jQuery and CSS.

The current stable version of Koha, 16.05.4, ships with some 548 system preferences. These are stored in the ‘systempreferences‘ table in the database. Inside the Koha staff client, they are accessed by visiting the Home › Administration › Global system preferences menu link. If this is the first time you are hearing about system preferences in Koha, or you are not deeply familiar with them, it is suggested that you familiarize yourself with this section of the Koha 16.05 manual.

The objective here is not to prevent someone’s use of Free Software, but rather to ensure they are only committing pre-validated changes to the production server. Changes have consequences, and whoever makes them should be fully aware of their impact.

While Koha’s per-user access control feature does provide a way to allow or withhold a user’s access to view / edit the system preferences, it does so with an “all or none” approach, i.e. either the user has access to *all* the system preferences or to none. This lack of access control granularity can prove undesirable under certain circumstances. For example, you may want certain settings *not* to be changed at all, or not changed accidentally, or not changed without first testing and validating the change on a staging system. In our case, on our managed systems we do not want the designated superlibrarian user at the client’s end to make changes to, say, the opacheader, opaccredits, OPACUserJS, OPACUserCSS, IntranetUserJS, IntranetUserCSS and OpacNavBottom system preferences on the production VM without first testing the changes on a test VM.

The implementation

We implemented the setting specific ‘lockdown’ in the system preference settings using a bit of jQuery and CSS.

Step #1

First we identified the selectors we needed in order to enable the lockdown. The easiest (and recommended) way to do this is to ‘inspect‘ your target DOM elements (i.e. the ones you want to lock down) on the System preferences administration page(s). As mentioned before, we want to lock down the following sysprefs: IntranetUserJS, IntranetUserCSS, OPACUserJS, OPACUserCSS, opacheader, opaccredits and OpacNavBottom. Looking at the DOM made it clear that we needed to work with the following id-based selectors – pref_IntranetUserJS, pref_IntranetUserCSS, pref_OPACUserJS, pref_OPACUserCSS, pref_opacheader, pref_opaccredits and pref_OpacNavBottom respectively.

Step #2

The next step was to decide how tight we wanted to make the ‘lockdown’. We did not want it airtight, so here is what we did: we left IntranetUserJS and IntranetUserCSS merely disabled, while for the rest we removed their respective textarea elements from the loaded DOM altogether. Had we wanted things really tight, we could have done the same for the two disabled ones.

lockdown_01
Click on the image to view it at full size
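
In case the screenshot is hard to read, here is a sketch of the kind of jQuery involved (placed in the IntranetUserJS syspref; the exact code we used is in the image above):

$(document).ready(function() {
    // Leave these two merely disabled; we will need one of them to unlock later
    $('#pref_IntranetUserJS, #pref_IntranetUserCSS').attr('disabled', 'disabled');
    // Remove the rest from the loaded DOM altogether
    $('#pref_OPACUserJS, #pref_OPACUserCSS, #pref_opacheader, #pref_opaccredits, #pref_OpacNavBottom').remove();
});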

Note: Should you use .remove() on all the elements above instead of setting the attribute to disabled, then the only way to get access to them would be by directly editing the IntranetUserJS syspref’s value in the database.

Step #3

We will also add hints to the label so that users can understand why they are not able to access the setting. See the green arrow on the left above for the code. Once done, save the IntranetUserJS syspref and exit. We are done.
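
As an illustrative sketch only (the actual code is in the screenshot; the wording, the Font Awesome lock icon and the assumption that the label’s for attribute matches the pref’s id are ours):

// Hypothetical hint; adjust the selector and wording to taste
$("label[for='pref_OPACUserCSS']").append(
    ' <i class="fa fa-lock"></i> Locked: please test changes on the staging VM first.'
);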

Checking our work so far

Let us search for the OPACUserCSS system preference. We will see (as given below) that the editable textarea element is no longer present. Note the “Click to collapse” text without the editable textarea holding the actual setting value. Also, there is now a small lock icon against the label, with text explaining why the edit window is not present.
lockdown_00a

Unlocking the ‘lockdown’

What we have implemented so far will prevent someone with system preference edit permission from accidentally editing the ‘locked’ system preferences from the Admin page. In order to “unlock“, first we need to access the IntranetUserJS syspref which we had only disabled in this case.

Unlocking – Step #1

Right click on the IntranetUserJS syspref and select Inspect

lockdown_00b
If you did it correctly, the element with the id pref_IntranetUserJS will be highlighted. Note the disabled attribute, which is pointed to by the red arrow in the screenshot below:

lockdown_00c

Unlocking – Step #2

Double-click to select the disabled="disabled" attribute of the textarea element.

lockdown_00d

Unlocking – Step#3

Delete the disabled attribute; the textarea element should now look like this.

lockdown_00e

Unlocking – Step #4

Close the Developer Tools window, but *do not* move out of the IntranetUserJS syspref yet! We still have work to do. You will see that the textarea is no longer disabled and is now open for editing. In order to remove the ‘lockdown’ on our system preferences, we need to comment out the jQuery code we had added earlier. We do this simply by wrapping the relevant code inside a C-style /* [...] */ comment block. See the green arrows in the image below:

lockdown_00f
Click on the image to view it at full size

Unlocking – Step#5

Save the IntranetUserJS syspref and now try to access the OPACUserCSS syspref again. As you can see from the image below, the system preference is no longer locked and is now open for editing.

lockdown_00g

Re-locking

Once we are done making the necessary changes, we may wish to ‘lock down‘ the settings again. We simply need to go back and edit the IntranetUserJS syspref, un-commenting the locking code by removing the C-style comment markers. Easy peasy!

Switching content language on Koha OPAC with user interface locale switching

How to display custom content in the user’s own language on the OPAC.

Last week Mr. Ahmad Nasser from the Future University of Egypt reached out for a bit of help. The Koha OPAC provides certain sections / blocks, e.g. OpacNav, OpacNavBottom and OpacMainUserBlock, where libraries can add custom content / instructions / links / widgets to aid and better inform their users about the library and its services. Nasser’s case was interesting since he needed to cater to a bi-lingual readership, where some users may prefer to read the information presented in Arabic rather than in English.

The development of language was the greatest breakthrough of human technology. It helps us to communicate. But language, when it is not the same for a group of people, can create problems. How does a Bengali communicate with a Tamil, or a Malayali with an Assamese, when they do not understand each other’s language and do not happen to speak English, the global lingua franca? Sort of like the line from the famous song picturized on Raj Kapoor in his 1955 super hit Shree 420 that goes “mera joota hain Japani, yeh patloon engleesthani, saar pe topi russi….” (‘My shoes are Japanese, these pants are from England, the red hat on my head is Russian…’) – indeed, how do we cater to this diversity!

trans_raj
Still image copyright: Shemaroo Videos

When it comes to software like Koha, the answer lies in localization – a process which allows software to present information to its users in their own language of choice.

Koha’s user interface (UI) locale switching allows users to switch the user interface language, e.g. from the default English to, say, Chinese (Taiwanese) or Hindi (India), as long as the language pack exists for Koha. However, this switching is not designed to tackle the language of the content in the custom blocks we mentioned in the previous paragraph.

Nasser wanted a way to display the content of, say, OpacMainUserBlock in Arabic when the user switched the user interface to Arabic, and back in English when another user wanted to use the default language (i.e. English). This post highlights one way by which Koha administrators / librarians can give their users a way to see the content in the language of their choice rather than an arbitrary default language, or even worse, a mish-mash of two or more languages.

This case is relevant to libraries in India as well, with our multitude of languages – 22 official languages at the last count. How do we serve content in English to the top 10 – 15% of our population, and at the same time address the rest of our population, who are literate in their own languages and may some day be using Koha? Our records may be in the local regional language, but how about the added custom content? This solution works by looking at the present locale[1] selected by the user on the Koha OPAC.

The Solution

As I’ve mentioned, this is not the only way to solve this problem. But it is probably the simplest *and* the cleanest one. It does so by using three things:

  • The selected locale language of the Koha OPAC
  • One line of custom CSS placed into the OPACUserCSS system preference
  • Exactly 3 lines of JavaScript added to the OPACUserJS system preference

In this blog post we’re only looking at managing OpacMainUserBlock – the central block on the OPAC – but the solution can be applied to every other block that accepts custom HTML markup, including opacheader and opaccredits, as well as “Koha as CMS” pages etc.

If you have never set up multiple language support on Koha, you can read up on “Installation of additional languages for OPAC and INTRANET staff client” and familiarize yourself first.

The Demo

I’ve set up a multiple language demo Koha installation with the following languages aside from the default English:

(a) Arabic (ar-Arab)
(b) Czech (cs-CZ)
(c) German (de-DE)
(d) Hindi (hi)
(e) Slovak (sk-SK)
(f) Chinese (Taiwanese: zh-Hans-TW)

The URL is https://demo-opac.l2c2academy.co.in/cgi-bin/koha/opac-main.pl where you can see this working in action. As you change the selected language and right-click to view the page source, you will notice that the “lang” attribute of the “html” element changes to the language codes given inside the parentheses above. Below is a snapshot of 6 of the 7 languages as rendered in the HTML source once you change the language.

trans_all_src

Hint: That lang attribute is our locale identifier and it changes every time we select a different language. Try it out on the demo and see it for yourself.

Since this depends on using CSS to toggle the visibility of our local language content, we are going to define a disabled class in our OPACUserCSS system preference like this:

/* disabled class */

.disabled {
   display: none;
}

In this example we will use a <div> element like the one given below:

<div class="en disabled">

 your local language content goes here

</div>

However, we can use this technique on *any* HTML element whose visibility can be toggled via its display CSS property [2]. We need to add two extra classes to our HTML element: the first is named after the lang attribute value, and the second is the disabled class. We’ll need to repeat this definition for each language that we want to deal with, as sketched below.
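
For instance, a trimmed-down sketch with just the English and Arabic blocks (placeholder text only; the full markup is in the listing referenced below) could look like this:

<!-- One block per language; the first class matches the OPAC's lang attribute -->
<div class="en disabled">
  <p>Content for English readers goes here.</p>
</div>

<div class="ar-Arab disabled">
  <p>Content for Arabic readers goes here.</p>
</div>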

For your reference, here is a listing of my OpacMainUserBlock for this example; please download and study it in order to understand the process better.

NOTE: For this example, I’ve selected a single paragraph from the entry on “Wikipedia” from the Arabic, Czech, German, English, Hindi, Slovak and Chinese Wikipedia.

Once our custom HTML is in place, we need a way to toggle its visibility (the CSS display property) based on the user-selected language locale, via the lang attribute. For this we’ll use the following jQuery snippet in our OPACUserJS system preference:

$(document).ready(function() {
  var selectedlang = $('html')[0].lang;
  var buildClassString = ".".concat(selectedlang);
  $(buildClassString).removeClass('disabled');
});

The first line finds the lang attribute of our <html> element. In the next line we build a string to hold the selector for the class (since classes are denoted in jQuery selectors by a dot in front of the class name). And finally, in the third line, we remove the disabled class from the content whose language class matches the lang attribute. By removing the class from the element, we automatically cause its display CSS property to revert to visible.

What really happens behind the scenes

The custom HTML markup is first loaded with its visibility turned off. Once the page is loaded, the $(document).ready() jQuery call looks up the currently selected language and removes the display: none; CSS style from the matching element by removing its disabled class. As a result, the element and the content it is designated to display become visible. This whole cycle is repeated whenever we select another language. Thus, we are now able to provide our users with custom HTML markup and content based on the language they select.

Reference

[1] “Locale (computer software) – Wikipedia, the free encyclopedia

[2] “CSS/Properties/display – W3C Wiki

Quick tip: Add “barcode” lookup to your OPAC’s search index selection dropdown

If you wish to add an option to the OPAC search dropdown, e.g. “Barcode”, you can achieve it with a single line of jQuery. There is absolutely NO need to edit masthead.inc as suggested in BUG #8302 – http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=8302. This jQuery one-liner does the job rather well; simply place it inside your OPACUserJS system preference.

$("select[name='idx']").append("<option value='bc'>Barcode</option>");

If you wish to know how I set the value to bc, I suggest you take a dive into the ccl.properties file in your Koha installation.

P.S. You can also search as bc: <your barcode number> in the search box. That works too, even without adding the option to the drop-down, as you are directly passing a CCL query option back to the Koha search module.

This post is based on an earlier Facebook post published here https://www.facebook.com/l2c2technologies/posts/775648639191038 on Feb 23, 2015.

The 5-Minute Series: JavaScript or CSS – what is better at hiding the ‘No cover image available’ markup?

How to efficiently remove the ‘No cover image available’ placeholder using CSS rather than jQuery, when AmazonCoverImages or GoogleJackets can’t retrieve a cover image for your displayed title.

We really like our OPACs to display the cover images of the books and journals which we have often so painstakingly cataloged. Out of the box, Koha allows us to fetch and display book covers from several sources, both local and from data providers over the Internet, the commonest of these being (a) AmazonCoverImages and (b) GoogleJackets. While these two settings (either or both) will work for a large number of books, for users in India there are also a lot of titles that these two sources do not cover, especially books in Indian languages, or simply books printed without an ISBN (there are a lot of these in India).

When no image is found, Koha displays the text “No cover image available” as a placeholder. A lot of people would rather not see this. The Koha JQuery Library on the Koha Community wiki offers the following jQuery-based approach:

$(document).ready(function(){
    $('.no-image').remove();
});

You simply place this snippet into the OPACUserJS system preference and hey presto! those pesky “No cover image available” displays are history. Well, yes and no! Yes, it works; and no, it may not prevent that text from being displayed at least once. Why? Well, how about a slow PC and an even slower Internet connection? Either of the two, or a combination of both, will usually ensure that you get a look at “No cover image available” before it is “removed” from the displayed page.

The simpler thing, IMHO, is to take the Cascading Style Sheets (CSS) route, which is as simple as placing the following one-liner into your OPACUserCSS system preference:

.no-image { display: none; }

no-image-removed

Not only will you *never* get to see the “No cover image available” markup anymore, no matter how slow your PC / Internet connection is, this is also more efficient: rather than using jQuery to first select and then .remove() a DOM element with a particular style class attribute, we simply declare that such elements should not be displayed at all. The browser’s rendering engine does all the optimization in this approach.

Koha spine label is not printing the “/” in your call numbers? Here is why.

If you have defined DDC as your classification source and have a “/” in your Koha item call number, it is not going to be displayed when generating spine labels. If you are in a hurry, or you are already aware of the segmentation mark, you can jump straight to the section The Answer.

The “Problem”

Earlier in the day a fellow user Dyuti Samanta came up with a question :

“Sir, I’m trying to print spine labels from Koha. However, I see that Koha does not print the front slash (“/”) in my itemcallnumber, even though the same is recorded in my MARC record and is otherwise displayed by Koha elsewhere. For example, the “CHA / L” in “025.4 CHA / L” is being printed as “CHAL”. So where is the problem, and how can I fix it?”

The Background

Dyuti’s question made me smile. And instead of immediately telling him about the “why” I pointed him to a comment left by Anamika Das on Vimal Kumar Vazaphally‘s blog post – “Spine label creation” saying “You are not alone with that question! ;-)”.

A call number typically consists of the Dewey class number + book number, i.e. Cutter number (or some other means of alphabetic arrangement). The forward slash “/” is deemed a segmentation mark (a la the prime mark in CIP records) in the universe of the Dewey Decimal Classification [1]. Up until DDC 22, published in 2003 [2], the slash or prime mark was used to mark the start of every standard subdivision (notation from Table 1) as well as the end of the abridged number. However, this rule changed from DDC 22 onwards (September 1, 2005 to be exact) and remains extant in the current edition, i.e. DDC 23, published in 2011. The new rule is that only *one* segmentation mark may be used, and that too only for marking the end of the abridged number [3].

Before-and-after examples straight from the Library of Congress:

Before DDC 22 – 551.21/09797/84

DDC 22 onward – 551.210979/84

Further, if you follow LC and OCLC norms, while the Dewey class number in MARC21 field 082 can definitely carry (since Sep 1, 2005) a *single* segmentation mark, the call number should never have one. With this background story in place, let’s look at Koha to understand what is happening here.

The Answer

The particular Koha code that has taken the slash out of both Dyuti’s and Anamika’s call numbers resides in the C4::Labels::Label Perl module, which is located at /usr/share/koha/lib/C4/Labels/Label.pm. Even more specifically, it is the _split_ddcn subroutine in Label.pm that is taking out the “/“. As we have already noted, under LC rules call numbers (unlike Dewey class numbers in 082) can’t have segmentation marks, so it takes out any “/” embedded in your call number while processing the spine label. Very specifically, it is this line in the _split_ddcn subroutine that does it: s/\///g; # in theory we should be able to simply remove all segmentation markers and arrive at the correct call number. And just why does _split_ddcn get invoked? Well, it is because of something you did during cataloging. Remember that you had recorded DDC as the classification scheme? It is that definition in your MARC record that calls in this sub 😀

Below you can see the _split_ddcn subroutine as on date of this post.

sub _split_ddcn {
    my ($ddcn) = @_;
    $_ = $ddcn;
    s/\///g;   # in theory we should be able to simply remove all segmentation markers and arrive at the correct call number...
    my (@parts) = m/
        ^([-a-zA-Z]*\s?(?:$possible_decimal)?) # R220.3  CD-ROM 787.87 # will require extra splitting
        \s+
        (.+)                               # H2793Z H32 c.2 EAS # everything else (except bracketing spaces)
        \s*
        /x;
    unless (scalar @parts)  {
        warn sprintf('regexp failed to match string: %s', $_);
        push @parts, $_;     # if no match, just push the whole string.
    }

    if ($parts[0] =~ /^([-a-zA-Z]+)\s?($possible_decimal)$/) {
          shift @parts;         # pull off the mathching first element, like example 1
        unshift @parts, $1, $2; # replace it with the two pieces
    }

    push @parts, split /\s+/, pop @parts;   # split the last piece into an arbitrary number of pieces at spaces
    $debug and print STDERR "split_ddcn array: ", join(" | ", @parts), "\n";
    return @parts;
}

Note: The _split_ddcn was first submitted to the Koha codebase as part of C4::Labels::Label module by Chris Nighswonger on Jul 20, 2009, by which time the LC’s single segmentation mark rule was already long in place.

So now what?

There are a few options available to you at this point.

(a) If you know what you are doing, you can modify the _split_ddcn subroutine so that it does not discard the “/” and handles the call number as you want it to. (Non-trivial and not recommended.)

dontsplit

(b) Go to “Manage Layouts” and edit your specific layout, un-checking the option “Split call number“. If you do this, your call number will be printed AS-IS as a single line of text. This means that if the call number is longer than the width of your labels, as it will be at several points in time, you have a *problem*.

(c) Keep an eye on this bug report filed by Katrin Fischer earlier this year, where she has said:

Currently the call number splitting seems to be mostly implemented for DDC and LC classifications. Those are both not very common in Germany and possibly other countries. A lot of our libraries use their own custom classification schemes so the call number splitting is something that should be individually configurable.

The bad news is that so far no one has responded to this bug, simply because to Koha developers servicing clients using LC / DDC, this is not a priority. So you can either wait in the hope that someone will soon attend to this bug, OR write this functionality yourself, OR sponsor a developer to write it for you.

(d) Take the item call number listing out of Koha as a CSV file and use a 3rd-party tool, e.g. gLabels, to generate your spine labels.

References:

[1] https://www.loc.gov/aba/dewey/segmentation.html

[2] Dewey_Decimal_Classification – Administration_and_publication

[3] “Sweet segment solution” from 025.431: The Dewey blog

A custom subject-wise report of titles with author name, no. of copies, subject name in serialized listing

A custom SQL report for Koha that generates subject wise title lists with author name, no. of copies, subject name and biblionumber, written in response to a reader query over email.

Last week Mr. Gautam Mukhopadhyay, Librarian, Chandrapur College in Burdwan, West Bengal wrote in with a request:

Respected Sir,

I’m writing this seeking a solution for the problem relating to a report generation from Koha. I want to get a list of titles under a particular broader subject field-tag (650). Quite a number of times I’ve checked from SQL Report. But all were in vain as those were not the same what I actually want to get. Following is the specimen of the opted report:
Sr. No. | Title  | Author | Copy No. | Subject
1       | ……….   | X      | 3        | Bengali
Under the subject Bengali or English or whatsoever, I want to get the titles those are belong to that particular subject. However, it won’t be a problem if there are different reports for different subjects. It’s Ok. But the SQL Query should be a general query structure that can be applicable for all such reports on the titles belong to a broader subject like Bengali, History, Geography etc.
Sir, please let me know the query structure, if possible.
Regards,
GM

Here is a possible solution, which pretty much does what Mr. Mukhopadhyay specified in his request. In this example we’ll use a sample MARC21 file, which can be downloaded from here, to try it out. This dataset has 14 unique bibliographic records with a total of 42 item (holdings) records, belonging to 03 specific broader subjects, i.e. English, Economics and Political Science. As per Mr. Mukhopadhyay’s use-case, the MARC field 650 holds the broader subject classification. However, to match real-world scenarios, the 650 fields in some of the cases have other subject headings defined, including narrower terms. Additionally, we are going to add an extra column to our report – the biblionumber – so that if required we can cross-check a title in the generated report against the biblionumber in the database.

CAVEAT EMPTOR: If you are going to try out this example, we suggest that you define a new Koha library and import this MARC file into it. Mixing this sample data with your existing records is strongly advised against.

Step #1 – Create a new Koha instance and set it up
We are going to use the koha-create Debian command to create a new Koha instance and we shall call our instance as demo.
sudo koha-create --create-db demo

You may call your instance by whatever name you like. If you are not aware of the koha-create command, please read up on “Commands provided by the Debian packages“. Next we will do a default setup and proceed to define a library that we’ll call “L2C2 Technologies Demo Library”, identified by the code “MAIN”, using these instructions here.

NB.: To use the marc file used in this example you must set the library code for your demo branch as “MAIN”, the name (of the library branch) can be whatever you want it to be.

Step #2 – Define a new Authorized value category
Since our example MARC file has biblios with only (a) English, (b) Economics and (c) Political Science, we will define a new authorized value category, which we’ll call SUBLOOKUP, under Home › Administration › Authorized values. Once set up, our new authorized value category SUBLOOKUP will look like this:
gmreport_02
This authorized value list will provide the subject selection list for our custom SQL report, so if you have more subjects you will need to add them here in the same pattern. The “%” in the Authorized value is *critical*; if you want to be really strict about it, you can drop the preceding “%” and retain only the one at the end. However, should you do that, your first 650 field *must* always be the broader subject heading that you wish to filter your report on.
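
As an equivalent sketch of what we set up through the staff client (assuming Koha’s standard authorised_values table), the three entries would amount to something like:

-- The '%' wildcards are the critical part discussed above
INSERT INTO authorised_values (category, authorised_value, lib) VALUES
  ('SUBLOOKUP', '%English%', 'English'),
  ('SUBLOOKUP', '%Economics%', 'Economics'),
  ('SUBLOOKUP', '%Political Science%', 'Political Science');
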
Step #3 – Define our custom SQL report
We will go to Home › Reports › Guided reports wizard › Create from SQL and create a new SQL report. In this case we’ll name the report “List title with number of copies filtered by subject” and add a note that says “A report written at the request of Mr. Gautam Mukhopadhyay, Chandrapur College, BWN”. The SQL is given below. Once saved, the report can be run.
SELECT 
  (@row:=@row+1) AS `S/N`, 
  gmData.Title, 
  gmData.Author, 
  gmData.Copies, 
  REPLACE (@TargetSubject:=<<Select the subject|SUBLOOKUP>>, '%', '') AS Subject, 
  gmData.biblioid AS `Biblionumber` 
FROM 
  (SELECT 
     biblio.title AS Title, 
     biblio.biblionumber AS biblioid, 
     ExtractValue(biblioitems.marcxml,'//datafield[@tag="245"]/subfield[@code="c"]') AS Author, 
     count(items.itemnumber) AS Copies, 
     ExtractValue(biblioitems.marcxml,'//datafield[@tag="650"]/subfield[@code="a"]') AS Subject 
   FROM 
     items 
     LEFT JOIN biblioitems ON (items.biblioitemnumber=biblioitems.biblioitemnumber) 
     LEFT JOIN biblio ON (biblioitems.biblionumber=biblio.biblionumber) 
   GROUP BY biblio.biblionumber 
   ORDER BY biblio.biblionumber) AS gmData, 
  (SELECT @row := 0) r 
WHERE Subject LIKE <<Re-select the subject tag|SUBLOOKUP>>
Let us take a moment to understand what this piece of SQL syntax really means.
(@row:=@row+1) AS `S/N`,

and

(SELECT @row := 0) r

The use of the @row variable and the counter (@row:=@row+1) gives us our “serial number” column in the report listing. We can also see the authorized value list SUBLOOKUP, which we defined earlier, referenced here in the SQL.

NOTE: As you may have noticed, we are asking the user to select the subject *twice* (first: ‘Select the subject’, and second: ‘Re-select the subject tag’). While in theory the runtime variable @TargetSubject should have made this unnecessary, in reality we ran into a type-casting error (see below), so we settled for this less-than-pretty way of asking the user to select the subject twice to get the job done.

gmreport_03

Step #4 – Running the report

After the report is saved, it is time to run it using the “Run report” option. What we see will look like this:

gmreport_04

We need to select the *same* subject from both drop-down lists and click on the “Run the report” button. Selecting “Economics”, in our case we get the following report:

gmreport_05

Step #5 – Prettifying the custom report user interface

Having the user select the subject twice is cumbersome as well as prone to human error, so we decided it was time for some jQuery magic to streamline this and leave users with only a single drop-down to choose from. For this we’ll turn to the IntranetUserJS system preference and add the following jQuery snippet:

 $("label[for='sql_params_Reselectthesubjecttag']").hide()
 $('#sql_params_Reselectthesubjecttag').hide();
 $("#sql_params_Selectthesubject").change(function() {
   var subval = $('#sql_params_Selectthesubject').val();
   $("#sql_params_Reselectthesubjecttag").val(subval);
 });

If this is the first time you are hearing about the IntranetUserJS system preference, you should definitely read up on it. For those of you who are familiar with IntranetUserJS, all we are doing here is (1) hiding the second subject selection dropdown and its label, and then (2) specifying that whenever the user chooses a value from the *first* drop-down, the second (and now hidden) drop-down should automatically have the same value selected. After saving the IntranetUserJS update, on running the report we shall see this:

gmreport_06

And bingo! We are done!
Extraa Innings: To see the actual report in action
  1. Go to the URL https://demo-staff.l2c2academy.co.in/
  2. Use User name / Password : demo / demo
  3. Go to the section Home › Reports › Guided reports wizard › Saved reports
  4. Select “Run” from the “Actions” dropdown at the right.
    gmreport_07
  5. Play with the subject selection options to see the different outcome.


MarcEdit QuickTip #3 – Getting your 952 (items / holdings data) field in place for import into Koha

Shows how to de-duplicate a .mrc file by merging duplicate bibliographic records spread all over the file, and then gathering up the holdings records into the repeatable 952 field that Koha expects for its item records.

Last night Pawan Sharma, a fellow user on “Koha Users”, reached out for some help with importing his items into Koha. Like many others, he too had moved his catalog data from a Microsoft Excel spreadsheet into MARC with MarcEdit, utilizing its “Delimited Text Translator” feature, which at the end of the process had given him a .mrc file. This he proceeded to upload into Koha by using the More > Tools > Catalog > Stage MARC records for import option.

There were no surprises here, *except* that for every single book with multiple copies, Koha imported each copy as a separate biblio record, instead of a single entry for the biblio with multiple item records attached to it via the repeatable MARC21 952 field that Koha uses for managing holdings data. Simply put, his data needed to be de-duplicated, with the holdings data merged back in before import, typically using the ISBN of the records (MARC field 020).

NOTE: If you wish to read more about Koha’s holding records schema see “Holdings data fields (9xx)” from the Koha Community wiki.

For someone who has not done this before, MarcEdit’s de-duplicate-and-merge workflow can seem like a daunting task. This post will hopefully demystify the process.

The discussions on Koha Users were based on a lot of assumptions, especially with no idea about Pawan’s data. So I offered to take a look at it. He first sent me a .mrc file that had 12806 records, which I immediately converted into MarcEdit’s MarcBreaker mnemonic, human-readable format.

marcedit_01

I then proceeded to take a “Field count” report (see under the “Tools” menu of the MarcEditor) to check exactly how many records had an ISBN (MARC21 field 020) out of the total number of records.

marcedit_01A

The result, as can be seen above: NOT A SINGLE ONE of the 12806 biblio records had an ISBN number! This file can still be de-duplicated and merged, but *not* using MarcEdit. On being told about this, Pawan mentioned that he had other .mrc files that did have ISBNs, and so he sent a second .mrc file (LG-32016-32979.mrc) over. It turns out that of the total of 965 biblio records in this second file, 828 records had ISBN numbers defined.

marcedit_02A

The next task was to extract the records that *had* ISBN numbers. The remaining 137 cannot be dealt with in this process and will have to be handled separately. For now, we closed the LG-32016-32979.mrk file with its 965 records and went back to the MarcEdit main window in order to use the “Delete Selected Records” option available under Tools > Select MARC Records.

marcedit_02B

The next few steps are simple, if not immediately apparent to a new user of MarcEdit. We’ll use the numbered markers on the screenshot to explain them in steps. First, we selected the LG-32016-32979.mrk file with the 965 records in step #1; next we typed in 020 (since we want to match on ISBN) in the Display Field option (by default it shows 245$a); the third step was to click on the “Import File” button. After the file is imported (it takes just a second or two, depending on your file size), the top-left data grid, which was blank so far, will show data similar to this. Finally, in step #4, we click the “Does Not Match” link. Records that do not have an ISBN number get selected, just as the big red arrow here shows.

marcedit_02C

The last step is to click on “Delete Selected”; this will open a File Save dialog with the title “Remaining Records”. In this case, we provided the name LG-32016-32979_ISBN.mrk, saved it, and exited from this deletion utility.

The file LG-32016-32979_ISBN.mrk now has the 828 records with ISBN numbers, each of which has a holdings record. This is what we will work with for the de-duplication process.

marcedit_02D

Using the Tools > Record Deduplication option of the MarcEditor, we will now move the duplicate records out into a separate file and save it with the name LG-32016-32979_ISBN_DEDUP.mrk, using ISBN as the field to identify duplicates. A popup showed us that 828 records were processed, so we are done with de-duplication. We will also need to save our original work file LG-32016-32979_ISBN.mrk, which now contains only biblio records with unique ISBN numbers. A quick check with the Field Count tool showed us there were now 523 records (down from the original 828; the other 305 records are the duplicates now saved in LG-32016-32979_ISBN_DEDUP.mrk).

marcedit_02E

Now for the next step, MARC merge, which is the last step in this process. We have to go back to the main MarcEdit window and use the menu option Tools > Merge Records. The order of the files we specify here is highly *important*. The “Source File” in this case was LG-32016-32979_ISBN.mrk (the file with the 523 records with unique ISBN numbers), the “Merge File” was LG-32016-32979_ISBN_DEDUP.mrk (the file into which we had moved the duplicates in the previous step), and finally, “Save File” is simply the name of the new merged file we are going to create (hint: this is the final file that we will push to Koha). We named the final file LG-32016-32979_ISBN_MERGED.mrk. The Record Identifier is of course 020 (i.e. the ISBN) and we move on to the next screen.

marcedit_02F

This next step is basically *everything* we have been working toward in this post so far: we select the field to merge in from the “Merge File” into the “Source File” – the 952 holdings field – and click Next.

marcedit_02G

In this case everything went well, and we were presented with the following screen that said “Merge Completed” and gave us the full path and filename of our merged file LG-32016-32979_ISBN_MERGED.mrk.

marcedit_02H

Of course we opened up the LG-32016-32979_ISBN_MERGED.mrk file in the MarcEditor. The first thing was to check the Field Count report, and this is what we saw: 523 biblio records with a total of 828 holdings records, which sounds right! Below is an example of the merged holdings.

marcedit_02I

Of course, there is still the task of exporting the MarcBreaker file (.mrk) back to .mrc so that Koha can ingest it via its MARC21 staging workflow, but everyone knows that 🙂

NOTE: For reference to this tutorial I’m attaching the zip file containing all the LG-32016-32979 files used in this example.

Planning to bulk import your patrons? Make sure you do not have in-line line breaks in your data.

In-line line breaks in a CSV file can really send your Koha patron import script into a tailspin. Here is what you need to watch out for, plus a couple of other gotchas that will make you want to upgrade your Koha instance if the version you are using is older than 3.22.7.

Last week a friend working at a local college approached me for a spot of help. He was trying to import his patrons into Koha but was failing miserably. After he nearly got his head snapped off (Me: Do I look like I’m in the fortune telling profession???) he agreed to send over his data – an MS-Excel sheet for me to take a look at.

I pulled up a 3.22.6 instance I had lying around and tried to import his data. Quite expectedly, there were errors galore, and pretty much the same ones he was complaining about.

blog_patron_1

Hang on! The categorycode, branchcode and surname fields were NOT missing in *any* single record. So what was going on here??? The most interesting thing to note here is that the patron importer script said:

272 not imported because they are not in the expected format

272 records parsed

Now this was really something, as the total number of student records in that patron uploader CSV file was only 144. So where did the number 272 come from?

The answer to this was easy to find. My friend’s data had several records in rather bad shape – they had embedded line breaks within the cells. I’ve highlighted the first few of the badly formatted cells in yellow in the screenshot below.

blog_patron_3A

So, I copied the first 28 records over to a new file, ran a hackish utility script to clean out the line breaks, saved these 28 records as a new file, and proceeded to upload it. This time, of course, “the fat lady sang”[1], i.e. the records got imported nicely and we were done! 😀
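
The hackish script itself is not worth reproducing, but a minimal sketch of the idea in Python (with hypothetical file names) would be the following; the csv module correctly parses quoted cells that legally contain embedded newlines, so we can flatten those newlines to spaces:

import csv

with open('patrons_raw.csv', newline='', encoding='utf-8') as src, \
     open('patrons_clean.csv', 'w', newline='', encoding='utf-8') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        # Replace any in-cell CR/LF with a plain space
        writer.writerow([f.replace('\r', ' ').replace('\n', ' ') for f in row])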

blog_patron_3

NOTE: Of course, while doing that we encountered a few Koha bugs as well – Bug id 15840 and Bug id 16426. The workaround mentioned in comment #16 of the latter bug, by Koha QA Manager Katrin Fischer, holds good in case you get stuck here and can’t immediately upgrade. Otherwise, to avoid these two bugs, your real option is to upgrade your Koha instance, something that I’m going to recommend to my friend (aside from him fixing his data).

Reference: [1] Wikipedia “It ain’t over till the fat lady sings”


Easy peasy way of automating remote backup on Google Drive for your Koha database

This post discusses how to automate your Koha ILS’s MySQL database backup onto Google Drive and send an email when it is complete. It shows how you can take advantage of Google Drive’s 15 GB of free space (Dropbox only gives you 2 GB on the free tier), and do it all from the command line, saving the much-needed RAM on your Koha server rather than wasting it on a GUI, which is also a security risk. Further, this attempts to introduce novice readers to the details of the commands they are supposed to follow, with further-reading resources should they be inclined to learn more.

Having your Koha ILS database regularly backed up to remote, cloud storage is an excellent idea. By doing so you ensure a critical off-site disaster recovery measure, which is very good. However, as with all things human, if we leave it to ourselves to do it, there will come a time when we will (a) forget to do it or (b) be unable to do it for some reason. As we all know, good ol’ Captain Murphy’s Law[1] will strike us whenever we are least prepared; in this case, typically that one time we forgot or were unable to take the backup, the darned thing will crash!

So backup automation is key. Not only does it ensure regularity without fail, it also removes one more essential chore from our immediate plate, leaving us free to do other things without feeling guilty about this key housekeeping task.

Cloud backup – Google vs Dropbox

Dropbox and Google Drive come across as the immediate choices for cloud-based backup. However, their free editions differ [2]… by about 13 GB of space between them. So for long-term online backup, Google Drive is the de facto choice.

Our objective

So, here is what we set out to do:

  1. create a datetime-stamped backup of the database; (so we can tell just by seeing the filename when the backup was taken)
  2. compress it with the bzip2 utility; (so all those loooooong lines of SQL text do not take up so much space; a text file can compress down to within 10% of its original size)
  3. upload it to a specified folder on Google Drive; (so that all our backups remain in one place, date-wise)
  4. email the user that the remote backup process is complete. (so that when we are outside or on vacation and don’t have access to our workstation, we still get a notification that it completed; and if we don’t get one, then something certainly went wrong and someone should do something about it)

And of course, since we are talking about making this happen every day at the same time, we need to create a cron job that will deliver all of 1, 2, 3 and 4 to us in a single neat little command.

As you all know, no self-respecting system administrator will ever be caught running the X11 windowing system on a production server. So we are going to do this the way real system admins do: from the command line.

NOTE: X11 is the geekspeak for the Graphical User Interface (GUI) environment we see e.g. when we log into an Ubuntu Desktop (which is typically the Unity desktop)

Command line in this day and age? Are you nuts???

No! And here is the reason. X11 is not only an inherently insecure protocol that puts your production system at risk, it is also (compared with a command-line-only system) a tremendous resource hog! We all know that more free memory (RAM) is usually-a-good-thing ™, so instead of wasting our precious RAM on running a GUI (and all the unnecessary software that comes along with it, making the system slow *and* insecure), we are going to show you how to do this all from the command line. One other thing: if you ever need the assistance of an expert, you will find that command-line setups are also easier to debug (for an expert). After all, aren’t they always asking you to check your “logs”? All those logs are, after all, command-line output. So, like the Chloromint ad below, please don’t ask us again why we love the command line! 😉

Preparations

We want a normal user account with no admin privileges; in our case we will call it l2c2backup, and we will create it from the terminal using the adduser l2c2backup command. See below:

blog_01

Next up, we need to switch over to the new user account and create a synchronization folder for Google drive.

blog_02

At this point we’ll press “Ctrl+D” to exit from the l2c2backup user and come back to the root or sudo user, for we now need to install a command-line Google Drive client on our system. We are going to use the (almost) official Google Drive command-line client for Linux, known simply as “drive” and available from https://github.com/odeke-em/drive

Since we are using Debian, we have the advantage of using the pre-built binaries, which we shall install in the following manner by executing in turn each of the commands:
# apt-get install software-properties-common
# apt-add-repository 'deb http://shaggytwodope.github.io/repo ./'
# apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7086E9CC7EC3233B
# apt-key update
# apt-get update
# sudo apt-get install drive

NOTE:If you are using Ubuntu or other mainstream Linux distributions, you can use the instructions given here on the Platform Packages page.

Once we have completed the installation of “drive“, we need to go back to our /home/l2c2backup/gdrive folder as the user l2c2backup and initialize the sync folder (i.e. /home/l2c2backup/gdrive) using the command “drive init“.

blog_03

Copy the really long URL that the command tells you to visit and open it in your web browser. You will see an application authorization dialog screen come up; click on the “Allow” button.

blog_03A

NOTE: Before pasting the URL, you must make sure that at this point you are logged in into the actual Google user account where you want to send the backups to. Don’t make a mess here.

Assuming you did everything as I have mentioned so far, you will be automatically redirected to the page with the authorization key. It will look pretty much like the one below. Of course, every request will generate a separate access authorization key, so use the one generated specifically against your request.

blog_03B

Copy this key and paste it back at the prompt in your terminal window and press <ENTER>. DO NOT TRY TO TYPE IT OUT BY HAND, COPY-N-PASTE IS THE ONLY WAY HERE!

If you have done everything alright then you should be back at the command prompt without any error or any other message. Your sync folder should now be ready.

Putting our solution together

Now that we have the Google Drive sync ready, it is time to look at each piece of our basic requirement.

1. Creating a datetime stamped backup of our database

First we need to create the name of our output file for the MySQL backup. For this we shall use: BACKFILE=<dbname>.$(date +"%Y%m%d_%H%M%S").sql;. The date format will give us a datetime string formatted as “20160723_000001” when the date & time is 12:00:01 (AM) on 23-July-2016. For this example, let us assume that the BACKFILE environment variable will hold the value koha_ghci.20160723_000001.sql.

Note: replace <dbname> with the actual name of your Koha database, which in our case is koha_ghci. So, the syntax for us looked like: BACKFILE=koha_ghci.$(date +"%Y%m%d_%H%M%S").sql;. If you want to learn more about the format specific to the date command, you can read up this.
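
You can sanity-check the filename pattern from the shell before wiring it into the script (output shown for the example date and time used above):

$ date +"%Y%m%d_%H%M%S"
20160723_000001
$ BACKFILE=koha_ghci.$(date +"%Y%m%d_%H%M%S").sql
$ echo $BACKFILE
koha_ghci.20160723_000001.sql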

Next we will create the actual db backup using the datetime-stamped output filename we just created. For that we use: mysqldump -u<mysql_db_username> -p<mysql_db_passwd> <dbname> > /home/l2c2backup/gdrive/$BACKFILE.

Note: replace the <mysql_db_username>, <mysql_db_passwd> and <dbname> placeholders with your actual values. In our example case, the actual backup command string looked like this: mysqldump -ukoha_ghci -pASx2xvercbHXzs2dP koha_ghci > /home/l2c2backup/gdrive/$BACKFILE.

2. Compressing our SQL export

The previous step had exported our koha_ghci database as koha_ghci.20160723_000001.sql. We shall now compress this with bzip2 /home/l2c2backup/gdrive/$BACKFILE, which will give us the compressed file koha_ghci.20160723_000001.sql.bz2

3. Upload the compressed SQL backup to Google drive

Before we proceed with the actual upload, we should create a dedicated directory *on* our actual Google Drive to store our backups. Let’s call this directory DBBACKUPS and create it in our online Google Drive space. It should be mentioned here that the upload command of the client we are using takes the form drive push --destination <remote_folder_name> <full_path_to_compressed_file>. This command asks for confirmation, and we need to pass “Y” for yes before it will proceed. So we take care of that by adding echo Y | before the drive push command.

So in our case it will be echo Y | drive push --destination DBBACKUPS /home/l2c2backup/gdrive/$BACKFILE.bz2

Note: If you wish to learn about the various other options you can use with drive push, I suggest you read this for the details.

4. Sending an email when the upload is done.

We are not running a dedicated, full-fledged mail server like, say, Postfix on this box. Rather, we are using the lightweight msmtp-mta with our Gmail account as the mail relay. If you want to know how to configure it, I suggest that you read this tutorial, ignoring the “mutt” part, which you do not require. It is very simple; we had email sending working in under a minute, which is just how long it took us to configure it.

Note: Just remember you *must* have openssl installed, otherwise you will never be able to talk to Gmail. You will also need to go to your Google account and enable support for what Google likes to call “less secure apps” (which means any app that does not use Google’s OAuth2 protocol for authentication). You will be authenticating over TLS, and it is a perfectly safe thing to do, so just ignore Google’s ominous tone and enable “less secure apps”.

Now that we have msmtp-mta up and running, we will send out that email using this: printf "To: <recipient_email_address>\nFrom: <your_gmail_address>\nSubject: <dbname> db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of <your_gmail_address>.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp <recipient_email_address>

In our case that happened to be printf "To: monitoring@l2c2.co.in\nFrom: indradg@gmail.com\nSubject: KOHA_GHCI db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of indradg@gmail.com.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp indradg@l2c2.co.in.

5. Putting it all together

Now that we have all the parts of the puzzle in place, it is time to assemble them into a single piece. The way it worked for us was: BACKFILE=koha_ghci.$(date +"%Y%m%d_%H%M%S").sql; mysqldump -ukoha_ghci -pASx2xvercbHXzs2dP koha_ghci > /home/l2c2backup/gdrive/$BACKFILE && bzip2 /home/l2c2backup/gdrive/$BACKFILE && echo Y | drive push --destination DBBACKUPS /home/l2c2backup/gdrive/$BACKFILE.bz2 && printf "To: indradg@l2c2.co.in\nFrom: indradg@gmail.com\nSubject: KOHA_GHCI db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of indradg@gmail.com.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp indradg@l2c2.co.in

Note: The reason we used the “&&” is that in BASH it stands for what is called a “logical AND”. In simple English, this merely means that unless the previous command executes successfully, whatever comes next simply won’t execute.

A BASH script and a cron job

We placed the one-liner we had cobbled together into the following BASH script, which we named “backuptogoogle.sh” and placed in the folder /usr/local/bin after setting its execution bit on with chmod a+x /usr/local/bin/backuptogoogle.sh:

#!/bin/bash
BACKFILE=koha_ghci.$(date +"%Y%m%d_%H%M%S").sql; mysqldump -ukoha_ghci -pASx2xvercbHXzs2dP koha_ghci > /home/l2c2backup/gdrive/$BACKFILE && bzip2 /home/l2c2backup/gdrive/$BACKFILE  && echo Y | drive push --destination DBBACKUPS /home/l2c2backup/gdrive/$BACKFILE.bz2 && printf "To: indradg@l2c2.co.in\nFrom: indradg@gmail.com\nSubject: KOHA_GHCI db backed up on GDrive\n\nSee filename $BACKFILE.bz2 on DBBACKUPS folder on Google Drive of indradg@gmail.com.\n\nBackup synced at $(date +"%Y-%m-%d %H:%M:%S")" | msmtp indradg@l2c2.co.in

We set up a root user cron job with crontab -e, adding the following line and saving it:

@daily /usr/local/bin/backuptogoogle.sh

Note: The @daily shortcut will execute our script exactly at midnight every day. If you want to know what other useful cron shortcuts there are, I suggest you read this useful post by my Koha colleague and good friend D. Ruth Bavousett over here.

Backup automation from command line

If you have been able to follow the instructions, suitably modifying them to your specific settings, you have just achieved backup automation from the command line. Like I said… it’s easy peasy!!! 😀

References:

[1] https://en.wikipedia.org/wiki/Murphy%27s_law

[2] http://www.cloudwards.net/dropbox-vs-google-drive/#features