Archive for the ‘Computers and Technology’ Category

Amazon Reviews and Timestamps

Sunday, January 31st, 2016

Recently, I followed a Twitter link that landed me here: k-lytics.com, a tutorial for authors about how to understand Amazon links and lessen the risks of review removal. Although the main takeaway (that it's better to use a link containing only the ASIN) isn't bad advice, the tutorial is wrong on just about every technical fact.

It’s hard to know where to start when everything is so . . . wrong.

Note: I've gone in and clarified where I realized I wasn't specific enough or used language that means something different to a developer or database admin than it likely does to a person who isn't either of those things.

The TL;DR

Amazon URLs don't identify the person who did the search, so Amazon is not using incoming links as a criterion for review removal. The value of qid= in a URL does not help distinguish the user account. While there are reasons to use a “clean” Amazon URL in your links, identification of your Amazon account as the link source is not one of them.

Credentials

Before I continue, and on the off chance that someone who does not know my background reads this post, here’s a statement of my technical credentials:

I am a former web developer. I have worked in dev-ops. (Technically, I think I still do, but for a much smaller company without scheduled product release cycles.) I am a SQL Server DBA and data architect. It has been my job to design and maintain the database back-end for commercial, enterprise web applications. I have attended daily meetings with software architects and developers where my responsibility was to head off boneheaded code and bad database designs or to design such structures for them. My current job is with a much smaller company, but the skill set is still required.

When someone starts talking about interpreting URLs, and particularly about databases, this is squarely within my technical expertise. Especially the database stuff.

The Actual Problem

Amazon has identified relationships between the poster of a review and the creator/seller of the product as reasons they will remove reviews. The exact words (see quote infra) are “perceived to have a close personal relationship” or “a direct or indirect financial interest.”

In order to establish these things, Amazon has to connect a given Amazon account with one or more external accounts. More on that later.

And so, you might think, of course a link on a third party site is an external thing that might create the appearance of a relationship. But link clicks would be a remarkably inefficient way of deriving that information.

A link sitting on your website, or Facebook, or Twitter, or Pinterest gets clicked by someone, that someone ends up at Amazon, and they buy your book. This is not behavior that Amazon, or anyone who sells stuff on the web, wants to discourage.

We know that Amazon has removed reviews from readers who love an author and post reviews of every book that author writes. That is because they have identified that the reader has done something such as like the author on Facebook, and that there is, therefore, an outside, personal relationship between the reader and the author. Facebook, like other social media companies, provides a wealth of information about who likes what and whom. That information is pretty easy to find out. The contents of a link URL are irrelevant to that determination.

That relationship is NOT contained in the URL.

If you have an Amazon link on your Facebook page and someone clicks on it and buys your book, the smoking gun isn't the URL string of the link. It's the referrer information that tells Amazon the click came from your Facebook page or profile. (And if it's from your profile rather than your author page, then you are likely asking for a false positive.)

An Amazon URL is a Dumb and Inefficient Way to Infer Relationships

Amazon is unlikely to be using URL strings from incoming third-party links (your website, Facebook, etc.) to figure out which reviews are suspect. They surely are interested in incoming links, but not in the way that article implies. Parsing URLs would be a strange and inefficient way to get that information, especially when third parties make it easy to mine far more relevant data.

Carolyn’s Theory

My theory is that authors who use the same email address for Amazon, for writing-related social media, and for their personal lives are more likely to run into problems with Amazon incorrectly deriving personal relationships where none exist. I suspect that logging in all over the place with a Facebook account only exacerbates the issue. Using the same email address at Amazon, Facebook, Twitter, etc., or filling in an alternate email contact at those sites that is the same as your Amazon email address, makes it really easy for Amazon to find connections and derive relationships, real or not, that look problematic. Especially if you haven't locked down your privacy settings.

The Technical Problems with the Analysis

Right. So, the claim is that a link containing stuff besides the ASIN is sufficient to invoke a review removal.

Amazon knows if a reviewer bought the item they’re reviewing. And they surely know what events led to the reviewer’s “buy this” click. If it’s a link from a third party site, then they have whatever information is in the referring link. However, that link does not contain the account information of the person who copied and pasted the link.

The tutorial implies that a qid value, should it exist, is sufficient to identify the account that created the link. That is false.

The qid value is an Epoch timestamp — the number of seconds since January 1, 1970. This value is precise only to the second. That right there tells you there’s a huge problem with the analysis in the tutorial.

The idea that the qid value provides enough information to identify the account that made the search is just … wrong.

The addition of the qid value does not and cannot guarantee uniqueness of the string. It is entirely possible for two people to make the exact same search in the exact same second and click on the top result in the same second. In that case, their URLs will be identical.
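To make that concrete, here's a tiny Python sketch. The URL shape and the ASIN are invented for illustration; the point is that qid is whole seconds, so two identical searches made in the same second produce byte-for-byte identical URLs:

import time
from datetime import datetime, timezone

def search_url(keywords, asin, qid):
    # Simplified, hypothetical Amazon-style search result URL; the real
    # parameter set varies, but qid is an epoch timestamp in whole seconds.
    return f"https://www.amazon.com/dp/{asin}?keywords={keywords}&qid={qid}&sr=8-1"

qid = int(time.time())  # seconds since January 1, 1970 (UTC)
print(datetime.fromtimestamp(qid, tz=timezone.utc))  # a date and time, nothing about WHO searched

# Two different people searching in the same second get the same qid.
url_person_a = search_url("demon+warlord", "B00EXAMPLE", qid)
url_person_b = search_url("demon+warlord", "B00EXAMPLE", qid)
print(url_person_a == url_person_b)  # True: no uniqueness, no identity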

How a DBA Gets Fired

Uniqueness is a key component of database design. If the data architect gets this wrong because they fail to account for the possibility of collisions where two objects cannot be distinguished from each other, they’re going to be out of a job.

Unique Snowflakes MUST Exist

It is impossible to guarantee uniqueness with a timestamp that is precise only to the second. I imagine many people unfamiliar with such concepts think that precision to the second is pretty darn precise. In this context, it is not. It's also not precise enough for things like the Olympics, by the way.

When the ability to uniquely identify something is required, you don’t choose imprecise values to achieve that.

Frankly, this is a dumb discussion. If you want to track search queries by account this isn’t how you do it.

The qid does have a useful purpose, but it’s not identifying the user who made the search.

Let me remind you that when Amazon needs to know what user account referred an incoming link, they don’t say, “No worries, we have that in every URL!” What they say is, sign up for an associates account so we can give you a uniquely identifying string that tells us the link came from you.

More Problems

The tutorial goes on to state that the number of times the link was clicked provides evidence of author manipulation. No. Mere clicks on a link are evidence of the popularity of the content and of the author. If the number of clicks alone were evidence of manipulation, then popular authors would disproportionately suffer under such a system. Further, if that were true, then no author should ever use associates links.

Additional information is needed in order to infer manipulation and that information is not in an Amazon URL.

I think it's pretty ridiculous to think that Amazon would take punitive action based on data that does not identify the account that made the link. The implication that the qid portion of the URL does so is, in a word, bullshit.

Here’s what Amazon says about its policy (found here):

Authors and artists can add a unique perspective and we very much welcome their customer reviews. While we encourage reviewers to share their enthusiasm and experience, there can be a fine line between that and the use of customer reviews as product promotion. We don’t allow anyone to write customer reviews as a form of promotion and if we find evidence that a customer was paid for a review, we’ll remove it. If you have a direct or indirect financial interest in a product, or perceived to have a close personal relationship with its author or artist, we’ll likely remove your review. We don’t allow authors to submit customer reviews on their own books even when they disclose their identity.

And here’s a few of the items that prompt removal:

  • A product manufacturer posts a review of their own product, posing as an unbiased shopper
  • A customer posts a review in exchange for $5
  • A family member of the product creator posts a five-star customer review to help boost sales
  • An artist posts a positive review on a peer’s album in exchange for receiving a positive review from them

For that last one, substitute “author” for “artist” and “book” for “album.”

There’s very, very little in any Amazon URL that provides any of that information.

It’s not the purchase that is suspect. Amazon knows who bought what. Amazon is saying there is a non-commercial, personal relationship between the poster of a review and the author.  The URL doesn’t provide a smoking gun of “These people are buddies outside this commercial transaction!”

What people suspect Amazon is doing to determine those relationships is examining things like connections between Amazon accounts (Kindle sharing, mailing addresses, etc.) or links between Amazon email addresses and possibly IP addresses that indicate one person is posting under multiple identities. They're also believed to be looking at other social media accounts, including Facebook, and at places where unwise authors might obtain insincere reviews, such as Fiverr, including taking legal action against those services. Gifting a book to a reader is something that appears to trigger an issue with a subsequent review.

Even More Problems

If you listen to that tutorial, you’ll come away thinking several incorrect things.

The tutorial implies that the qid, which is a Unix epoch timestamp (the number of seconds since January 1, 1970), is a unique identifier. This is so false I immediately lost track of the tutorial because I was all wha??? (No worries! I listened three times to get their statements straight.) It also manages to imply that the qid somehow identifies the user making the query. That, too, is false.

It makes a big deal of demonstrating that a qid value changes over time. Um, doh?

Wrong about Short Links, Too

Then the tutorial talks about short links and implies that using a short link will strip the identifying data from a copied Amazon URL. That is also false. Whatever URL you paste in as the short link's destination, qid and all, is exactly where the click resolves.

So, suppose you use bit.ly/mybook  as the link you post at FB.

When someone clicks on your FB bit.ly link here is what happens:

The user goes along for the ride to bit.ly, where bit.ly looks up the destination you gave it for bit.ly/mybook. (This happens really quickly. The user is unlikely, but only unlikely, to notice the fraction of a second they spend at bit.ly.)

Bit.ly sends the user to the destination you copied and pasted from Amazon. The ENTIRE URL you copied and pasted. Including any applicable qid or other search string.
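If you want to see this for yourself, a couple of lines of Python will show exactly what a shortener hands back. The short link below is made up; substitute one of your own:

import requests

# Ask the shortener where the link points without following it any further.
resp = requests.head("https://bit.ly/mybook", allow_redirects=False)
print(resp.status_code)              # a redirect status, typically 301
print(resp.headers.get("Location"))  # the full Amazon URL you pasted, qid and all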

Lastly, the tutorial completely omits any consideration of Amazon associates links. If it were true that Amazon uses information from incoming third-party links to figure out whose reviews to remove, then authors should NEVER use associates links. An associates link actually DOES identify the account that created the link. But that's an absurd result. Amazon wants people to use their associates links.

Precision in Websites and Databases

Amazon processes millions of transactions, and there are, guaranteed, many, many queries that occur in the exact same second. Database systems that need to know which transaction to commit first are looking at milliseconds and nanoseconds. A timestamp that is precise only to the second is therefore inadequate for identifying separate transactions. An epoch timestamp might help differentiate strings, but it cannot uniquely identify them. And even if it were added to a search string to make it unique, an imprecise value like that cannot guarantee there won't be a collision.

Here’s what the timestamp can efficiently do: create an easy, lightweight way to compare the start time of the product search result to actions taken later. So you know something like, how long it took the user to click buy. It’s easy and lightweight because all you have to do is some arithmetic like subtract one epoch value from another.
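For example (the numbers are made up), the arithmetic is a single subtraction:

search_qid = 1454284800   # epoch seconds when the search results were served
buy_click  = 1454284923   # epoch seconds when the user clicked buy

print(buy_click - search_qid)   # 123 seconds, about two minutes from search to purchase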

Why You’d Want a Clean URL

Long URLs are subject to errors that break the link. Certain characters, such as spaces and ampersands, may need to be encoded so the URL is parsed correctly. You might not copy the entire URL. It's a lot of fiddly work. A clean URL is also easier to read in your HTML and in your analytics.

But it’s not because Amazon is using a qid to identify the person who created the link.


Mid-Month Report in which I talk about JSON

Wednesday, January 20th, 2016

So far, my 2016 has been a success factory!

1. Operation New (to me) Desk remains on target, if not slightly ahead of schedule.
2. My keyboard tray arrived and I assembled and attached it.
3. My file cabinet/printer stand arrived, and I assembled it, put it next to the desk, put the printer on it, and then put stuff in the drawers.
4. I researched, decided, ordered and obtained a new chair. It arrived and I assembled it and I’m now sitting on it.
5. My Demon Warlord is at the proofreader.
6. I got all the files for the stand alone novella release of An Unsuitable Duchess to the person I decided to use for all my formatting going forward.
7. I asked my graphics guy to do color versions of some custom work he did for me because I’ve stopped kidding myself that I want to deal with it myself. Finals arrived today. They’re awesome.
8. I found someone to do the cover for An Unsuitable Duchess and that’s done. I’m just waiting for the final files.
9. I hired someone to do a software update for me because that's another thing I don't have time for.
10. Print cover for My Demon Warlord is done and awaiting final page count.

And, here’s a coincidence that I thought was pretty funny, which in a way is No. 11 only it’s not done yet. I was emailing with my formatter who mentioned that he was using JSON files for a database source and I sent him this screenshot:

[Screenshot of a JSON file]

Because I was, at that very moment, working out the schema for my planned DB to manage my ebook info and links. I didn't put it all in something SQL Server-ish because I do that all day and didn't want to come home and do more. I could have bothered to learn MySQL and used that, but to be honest, a NoSQL solution was way more interesting to me, and I even think it's more appropriate, because NoSQL is kind of designed for systems that are a bit fluid the way this book shit is. Now I don't have to deal with NULLs or spend more time than I want properly architecting the relational version, because, quite literally, I would be sitting there going, but how would I scale this out?? if I took shortcuts that denormalized the tables. That would drive me nuts. And so.

I picked one book as my sample schema document and started setting out the data and mapping the arrays. Since I have MongoDB installed on the Macs, I guess I'll just put it all in MongoDB, and now I'll get my money's worth from the Mongo GUI tools I bought back when I was playing around with MongoDB at the previous day job and getting SQL Server to produce JSON files for me. SQL Server 2016 CTP 3.2.1 (which I have at the current day job) can supposedly do this without all the shenanigans I went through in SQL 2012. So, fun, eh?
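For the curious, here's roughly what one of those documents might look like going into MongoDB via pymongo. The field names and values are invented for this sketch, not my actual schema:

from pymongo import MongoClient

book = {
    "title": "An Unsuitable Duchess",
    "formats": ["epub", "mobi", "print"],
    "vendors": [
        {"name": "Amazon", "asin": "B00EXAMPLE", "link": "http://cjewel.me/example_Amazon"},
        {"name": "iBooks", "link": "http://cjewel.me/example_iBooks"},
    ],
}

client = MongoClient()                 # local mongod on the default port
client.books.titles.insert_one(book)   # a field that doesn't apply simply isn't there. No NULLs.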

 

 


Operation New Desk: Sub-Optimal Areas of Organization

Sunday, January 10th, 2016

To Avoid Hell, Organization Requires Commitment

So, some time ago, the day job negotiated renovated office space… and a lot of office equipment was offered up to employees. I snagged an adjustable desk because I want to set up a treadmill desk. I measured etc and confirmed the desk would fit in my room. I found a college student with a truck and arranged to get the desk to my house. The desk isn’t huge but it’s not light. But also, I was in the middle of trying to finish My Demon Warlord and Seduction in Winter and so I ended up having the guy park the desk in the living room until I had time to clear out drawers etc…

And there the desk stayed. For a really really long time. Because desk drawers. Because all the stuff I had on the desk… And the closet and and…. And every spare minute needed to go into writing.  Organization requires commitment of heart and time. If you stop halfway into an organization project you might as well go live in hell.

I was starting to hate the desk, taking up space in the living room on its end… A too-inviting target for small dogs.

And I was dreading the work…

But then I finished My Demon Warlord and sent it to the copy editor and whoo-hoo! My son was still home for Winter break and I finally had time to spend with him. I had him set up Apple TV when he got here and then signed up for the free trial of Netflix. And he had been watching Netflix. With my break between books he and I watched the first episodes of Jessica Jones, Daredevil, and Sense8. Then we watched several episodes of Jessica Jones and then he went back to school.

Sub-Optimal Areas of Organization

And so, there I was with a long New Year weekend and a desk taking up space and so began Operation New Desk Binge Watch. I emptied desk drawers and surfaces and other places of  . . . let’s be kind and call them “sub-optimal areas of organization” and I brought out the shredder and put all this “stuff” into boxes and watched Jessica Jones while I shredded and sorted and discarded and found new homes for things. Rinse and repeat. When I ran out of Jessica Jones, I started on Daredevil.

Then I was exhausted. But yesterday, I looked at my office with my old icky desk cleaned off but still with a broken drawer and said “I’m going in!”

Only Slightly Bruised

I got the old desk out by myself because it’s not all that heavy, just 20 years old and, well, it’s had a broken drawer for years and no holes for cables so I wasn’t even sorry to see it go out. My brother was over, and he helped me maneuver the “new” desk in. My right toenail is only slightly bruised, I swear.

Then I set up the computer and dropped the cables through the cable holes in the back and Whoo-hoo!!

Phase Two Point Five

So it turns out I need a keyboard tray for the desk, which whatever. I ordered one on Amazon last night — while I was in bed with the lights off. THIS is how technology improves our lives. Since the new desk does not have drawers, today I’m going to buy something suitable that the printer can sit on (currently it’s on the floor) with drawers so I can proceed to Phase 3.

Phase Three

Phase three is to go through all the stuff I didn’t discard and figure out where to store it. Some will go in the Phase 2.5 pedestal thingee and the rest will be relocated to …. somewhere else.

Because of the NSA case, I have a lot of documents that need to be preserved, which is an additional challenge but might as well have it all in the same box, right?

Verdict

Jessica Jones is an awesome series. I love her so hard. I didn’t love Daredevil until about 4 episodes in but then I did. I’m on Sense8 now and I’m blown away at how great this series is. Normally, when I see TV it’s something from the more traditional channels and I watch maybe 2 times a year. But these shows have women doing awesome things and people of all colors and no one is being safe about writing, especially Jessica Jones. Though, a couple of times on Daredevil I thought some of the dialogue was pretty clunky and maybe they needed to call a lawyer a couple of times to get a grounding in how the law might actually work. I’m only a few episodes into Sense8 but yay for a show that isn’t just straight white guys!


Technology Woes . . . My Sob Story

Tuesday, October 20th, 2015

So, a couple of days ago my 2011 model iMac began misbehaving in a worrisome way. I got the soonest Genius Bar appointment possible and it was still too late. Yesterday it basically died. Yesterday was the same day my replacement phone arrived and with the dead iMac, my phone backup was unavailable. Because of our internet situation here (only recently resolved mostly) I never dared back up to iCloud. Not possible. So… I backed up the phone to my Macbook and for some reason it would only encrypt the backup but without ever asking me to give it a password. NEVER HAPPENED. And it wanted this nonexistent password in order to restore my backup to the new phone.

The workaround is a backup to iCloud. So, OK. Our internet is OK enough to risk it.  I started at 7:00PM and at 4:30 this morning it was still going. And at 5:30 it just quit. No error message. No nothing. Just “Your backup could not be completed” or else, no message at all. So I took it to work and tried a backup to a Windows machine. Same thing. Encrypted backup. NO opportunity to give a password. I tried iCloud again. Nope.

Finally, the nice lady at Apple wondered if I had enough space in iCloud. Well, I had no idea how big the phone backup is. Apple doesn’t tell you.

Long story short, the answer is no. Apple just swallows a “you don’t have enough space” error message and misses the upsell opportunity too.

Then I got the extra storage and that took two hours for the phone to believe I had it, and THEN the backup succeeded.

THEN the restore was stuck on “1 hour remaining” for three hours.

I went to my other office where they have super duper internet and started over with the restore and it took 20 minutes. TWENTY MINUTES!!!!!

JFC.

This was nothing but a series of error conditions that Apple should be trapping. I have suspicions about the backup, though. And since I will be back on the phone with them tomorrow to explain my resolution and complain a bit, I'll relay the possibility that the iPhone encrypted backup process doesn't work when the drive is already encrypted.

::Sigh::

Anyway, My Demon Warlord is going well. I’m doing the final paper read-through so the dead iMac could be worse.


Evergreening Your Links

Friday, September 25th, 2015

What Is Evergreening Links?

Evergreening a link means making sure the destination of a link always lands the user in the correct place, even when the correct place changes. (P.S I will probably be tweaking this page for a bit, but as of this original writing, there were a lot of people who wanted to know quickly.)

TL;dr :: use a plugin such as Redirection, or a link shortening service such as Bit.ly (likely the paid version) or an installed application such as YOURLS to manage updating the destination of links inside your books.

I am writing this in the context of eBooks where authors include links to the books they’ve written in the back of the book. However, the concept applies to any link you make.

The Basic Problem for Authors who write More than One Book

The more backlist you have, the more books you end up republishing with updated links for titles published after you wrote the previous ones. It's a problem and can end up being a lot of work. But what if there were a way to change the destination of existing buy links without having to edit and re-upload books you have already published?

There is. You need to create evergreen links for your books. You do this by putting a link in your book that points to a location you control, which then sends the user on to the current destination. A redirection, if you will.

There are Three Ways To Create Evergreen Links

There are 3 basic ways to achieve evergreen links. Some of the methods have more than one approach. Don’t worry, I’ll explain each of them. Also, some people do better when they can see a demo or a video, so don’t give up if a written explanation doesn’t quite do it for you. (Sorry, making an interactive demo involves more time than I have right now.)

  1. A plugin if you’re on WordPress or Blogger (SUPER easy!! Install the plugin and you’re done!)
  2. Build redirects at your website
  3. Use a link shortening service that allows you to update the destination of the link

Important Concepts

I assume you already understand how html links work. Even if you’ve only encountered them in the Word document you will upload to vendors, you should have encountered the need to create a link a user will click on to go someplace else. Other books you have written, for example, that you hope your readers will buy.

I also assume you are producing books customized for each of the major vendors such that in the version you upload to iBooks, all your buy links go to iBooks purchase pages. The version you upload to Amazon contains links that go to your Amazon buy pages. If you’re not doing this, you are losing sales.

I feel like I should repeat that. Backmatter links sell books. Vendor-specific links sell more books. You should have buy links in your books, and they should be vendor-specific for Amazon, iBooks, Google Play, Kobo and Nook at the very minimum. You will also need a generic version of your links. Those can go to your website.

No system is perfect (yet) but evergreening your links saves a lot of time and work.

Note: If you’re on the hosted, free version of WordPress, my understanding is you won’t be able to use plugins. Personally, while I realize that money can be an issue, this is an excellent reason to have a self-hosted WordPress install.

This is a business. Don’t leave money on the table because you’re too busy or don’t want to deal with the horror of tech. I get that, I really do. But if either of those things describe you, you can outsource the work. If readers loved your story, they WILL click those links to get more of your work.

Case Study

Assume you have written a three book series called Animals Who Talk.

Animals Who Talk Series!

  • Fred the Cat, Book 1
  • Suzy the Giraffe, Book 2
  • Roberta the Chicken, Book 3

Because you are a super fast writer, your production schedule looks like this:

Month 1: You write and publish Book 1.
Month 2: You write and publish Book 2.
Month 3: You write and publish Book 3.

The common situation is that at the time of publication, Book 1 will not contain any buy links to Books 2 or 3 because, of course, those books do not yet exist. On publication, Book 2 can contain links to Book 1 but not to Book 3. Book 3 CAN contain links to books 1 and 2.

On publication, without an evergreening system, the best you can do for Books 1 and 2 is send your readers to a webpage you set up about the series and/or each of the books. Sadly, the more clicks you put between your fans and your books, the fewer books you will sell. Commonly, this means an author will publish Book 2, wait for the vendor links to go live, then republish Book 1, which has been updated with the correct links for each vendor version of  Book 2. Then, when Book 3 is published, Books 1 and 2 are republished with updated links to Book 3. For each vendor.

An evergreening system means that all three books contain links to all the other books at the time you publish them. As vendor links go live for each of the books, you update your evergreening system (remember there is more than one way to do this!) once and only once without having to reupload ANY of your Animals Who Talk Series books.

Really Long and Detailed Explanation

You might want to skim or skip to the more technical explanations of the method below. Or you might want to read on to understand the use cases.

So, here’s my basic system:

I have YOURLS installed at cjewel.me. This is not required; you can use one of the other methods, but the concept is more or less the same.

I devised a naming convention for my short links that I can remember and follow.

Using the example of a booklist in the back of books that are on sale containing a link to a book that isn’t available yet:

1. I create a page on my website in the books section of the site, for that specific book, The Adventures of Roberta the Chicken, let’s say. Below is the URL such a page would have on my website.

carolynjewel.com/books/robertathechicken.php

That page has all the information about the book as I would do for any book page on my website. This is the book’s permanent home at my website. I can update it at will.

2. Over at cjewel.me (or in my browser, either way works), I create links something like this—not my actual naming convention, I’m naming for clarity here:

http://cjewel.me/RobertaTheChicken_Amazon

http://cjewel.me/RobertaTheChicken_iBooks

etc.

I tell YOURLS that all the vendor links resolve to carolynjewel.com/books/robertathechicken.php
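(If you'd rather script that step than click through the YOURLS admin screen, the standard YOURLS API can create the links for you. Here's a minimal Python sketch; the keyword is the example one from above, and the signature token is a placeholder you'd copy from your own YOURLS admin:)

import requests

resp = requests.post(
    "http://cjewel.me/yourls-api.php",      # the API endpoint of your YOURLS install
    data={
        "signature": "YOUR-SECRET-TOKEN",   # from your YOURLS admin
        "action": "shorturl",
        "keyword": "RobertaTheChicken_Amazon",
        "url": "http://carolynjewel.com/books/robertathechicken.php",
        "format": "json",
    },
)
print(resp.json().get("shorturl"))          # e.g. http://cjewel.me/RobertaTheChicken_Amazon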

3. In my books Fred the Cat and Suzy the Giraffe (Books 1 and 2 of my Animals Who Talk Series, on sale everywhere while Book 3 isn't out yet), my backmatter list of books looks like this. These are links, of course:

Animals Who Talk Series!

  • Fred the Cat, Book 1
  • Suzy the Giraffe, Book 2
  • Roberta the Chicken, Book 3

In the Amazon versions of books 1 and 2 my url (link) for Roberta the Chicken is:

http://cjewel.me/RobertaTheChicken_Amazon

There is no www because the point of YOURLS is short links; the install process makes that clear enough, so don't worry about it.

For my iBooks versions of books 1 and 2, my link for Roberta The Chicken is:

http://cjewel.me/RobertaTheChicken_iBooks

Currently, both the Amazon and iBooks links will send the user to my website page for the book.

so, for iBooks:
<a href="http://cjewel.me/RobertaTheChicken_iBooks">Roberta The Chicken, Book 3</a>

Again, recall that, currently, all the various links take you to my website page for Roberta The Chicken.

This means that when readers of Books 1 and 2 click on the Roberta link, they will end up at my website page for Roberta the Chicken where they will be told the book isn’t available yet and hey, join my mailing list to get notified as soon as it’s released.

4. Fast forward 6 months and now Roberta The Chicken is done and I’ve uploaded it to all the vendors. iBooks goes live first because they are awesome like that. As soon as I have the live iBooks links:

I go to YOURLS and edit the link cjewel.me/RobertaTheChicken_iBooks so it points to the live iBooks URL instead of my website.

From that moment forward, a reader of the iBooks versions of Books 1 and 2 who clicks on the Roberta link, will go to the iBooks page for Roberta The Chicken.

When Amazon goes live, I go to YOURLS and update cjewel.me/RobertaTheChicken_Amazon to point to the Amazon page instead of my website.

Same for Amazon, as soon as I update the YOURLS link, anyone clicking the links in the Amazon version of books 1 and 2 gets sent to the Roberta Amazon buy page.

You can make your links book-specific so you know not just that your link came from an iBooks reader, but an iBooks reader of a specific book. I advise you to think about this and devise a system that works for you:

For Fred The Cat and Suzy the Giraffe, you could make links like this if you wanted to:

cjewel.me/FredTheCat_RobertaTheChicken_iBooks <– use that link in the iBooks version of the Fred book for the Roberta link

cjewel.me/SuzyTheGiraffe_RobertaTheChicken_iBooks <– use that link in the iBooks version of the Suzy book for the Roberta link.

You need a naming system that makes sense to you. There's no reason you can't use really long “short” names, though longer names mean more opportunities for typos; they also tend to make more sense at a glance. If you use abbreviations, never deviate from them. It's worth spending some time working out your naming system.

YOURLS is also case sensitive, so FredTheCat is different from fredthecat.

It's more work to track book-specific URLs, but then you have more granular data, and more data is better! You'd know that iBooks readers of Fred The Cat clicked on the link to the Roberta book 500 times while iBooks readers of Suzy The Giraffe clicked on the Roberta link 754 times. Up to you.

This way, I do a lot less reuploading of books in order to update links. As long as I’m using my cjewel.me links, I can repoint my short links to wherever I want them to go. Some reuploading is unavoidable of course. Series you haven’t thought up yet, etc.

But How Do I Achieve this Magic??

1. Plugin

If you have a WordPress-driven website, the Redirection plugin is simple to use. I know several authors who are using that plugin. My site is a hybrid, so although I have Redirection installed for my WordPress instance, the plugin only redirects WordPress pages, not pages on the non-WordPress portion of my site, so it's not as useful for me as it is for others. But it's nice to have. If you're on WordPress, I recommend it. You don't need to read any further, unless you're unclear on the timing of the process.

2. Redirects on a non-WordPress or non-Blogger site

Depending on your webhost, you can do your own redirects, either directly in the htaccess file (assuming you’re on a Linux flavor server) or via a tool your host provides, or by building a redirect page.

If you don't know what an htaccess file is, DO NOT use the htaccess file method. You don't know enough to do this safely. A typo or wrong setting could disable your entire site, and really, just don't if you haven't mucked around in this file before.

If you’re on a windows server, well, I doubt your webhost would give you direct access to IIS. If you don’t know what IIS is, then this is also not something you should expect to do.

Most people are going to be on a linux server and if you have a good webhost, there will be a website tool that allows you to set up redirects. Mine has one that is OK enough, and I use that tool from time to time, depending on what I need to do. I explain more about this below.

2A. HTML redirects

If you have a regular website, you could also build web pages for your redirects. This is probably more work and the reporting would depend on how good your website analytics tools are. If you don’t have Google Analytics installed on your site already, get it set up (Google “Google Webmaster Tools” and you should find all the info you need.) That will help with overall analytics. Plus, if you ever have a malware issue, having access to Webmaster Tools can get your site cleared faster.

This method requires that you know how to create a webpage and upload it to your server, AND that you test that you did it right. Typos happen, people. It's not hard, but honestly, why would you want to learn to do this when you could be writing instead? Outsource it.

Here’s an example of an html redirect page:

This will display the information about an updated page, then take the user there. For evergreening, you wouldn’t want the page to wait. I built this page when I switched my site from html to php and certain files needed a manual redirect. The page “about.shtml” redirects users to “about.php” so if someone out there on the web is linking to my about.shtml page they’ll end up at the new page.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>Carolyn Jewel – About Carolyn Page Redirect</title>
<meta http-equiv="REFRESH" content="5;url=about.php">
</HEAD>
<BODY>
<h1>carolynjewel.com</h1>
<p>This About Carolyn page has been updated. Please wait while you’re redirected to the spiffy new page!</p>
</BODY>
</HTML>

This line is the key one:
<meta http-equiv="REFRESH" content="5;url=about.php">

The number 5 tells the page to wait 5 seconds then take the user to the url listed after ;url (that is, about.php)

You can set the number to any integer.

0 would be no wait. You’d still want to have the header and paragraph just in case someone’s browser or settings disallow redirects. Additional considerations go into deciding how to style the page and whether to provide a URL in the body, but I won’t bore you with that. For some pages where I do this for one reason or another, it looks just like my regular website.

If you elect to do this, I assume that you already understand at least something about what considerations go into building and styling a redirect.

3. Roll Your Own Short Links: YOURLS

YOURLS is free software that you install on your own domain and that allows you to create and manage custom short links. Since the point is a short URL, you'll need to register and host a suitably short domain, then install and configure the software. I blogged earlier about installing YOURLS.

Most of the process details for using YOURLS are explained above. When I needed to install an update to YOURLS I hired someone from Odesk. He was a Polish college student and did a great job for $22.00. It was totally worth it.

YOURLS comes with reporting so you can see how many times a link has registered a click, where they came from (IP address or country, and what time, etc.) There are other graphs and charts. Another advantage is that some vendors or sites have an issue with Bit.ly links because they can be used to obfuscate malware. Technically, so could a roll your own solution, but your short link domain wouldn’t be flagged unless you were a really bad person or got hacked. (Please don’t use a stupid password to secure your domain or the login to administer YOURLS.)

The advantage to a link shortening service (there are several such services) is that you can use them anywhere you want to, including Facebook, Twitter, etc, and for reasons other than book links. YOURLS includes a nifty tool that allows you to create short links from your browser. There is also a WordPress plugin that will create YOURLS short links to posts.

YOURLS is free, but I recommend you donate an amount you can afford. That would be super nice.

If you have questions, let me know in the comments and I can clarify or what have you.

 


Thoughts on Kindle Unlimited and Scribd

Friday, July 3rd, 2015

Some of you may know that Amazon changed the terms of its subscription service, Kindle Unlimited (KU), such that payments due to authors with books in KU are calculated in a different manner than previously. If you're a reader and you subscribe, you can read all you want for $9.99 a month, with the single limitation, so far, that you can have up to 10 books on your “shelf” at once. To get book number 11, you have to return one of those books.

With the Kindle Unlimited subscription you can access hundreds of thousands of Kindle books and thousands of audiobooks with Whispersync for Voice. You can keep up to ten books at a time and there are no due dates. Read your Kindle Unlimited books on any Amazon device, or free Kindle reading app. (Terms)

Scribd reinvented itself from a pirate site, er, “reader-centric sharing site” (Irony ALERT!), into a subscription service. For $8.99 a month. They paid all authors/publishers the same as a sale.

If you’re a reader, that’s a pretty sweet deal, assuming the books you want to read are in the program.

If you're an author, deciding whether to have a book in KU is a business decision, and not everyone's business needs and goals are the same; everyone's reasons for being in or out are different. Last year when KU debuted, I blogged about it here. Here's what I said then about how such a service could be profitable:

If you are paying authors/publishers a percentage of price, then for your business to be viable, that payout amount per month HAS to be less than 9.99 * (number of users subscribed).

This means a profitable user will read a number of books N per month where the payment due to vendors is less than 9.99. The more books they read, the less the wholesale price has to be (obviously), and, at 9.99 per month, the wholesale price has to be less than 5.00 for 2 books per month, 3.33 for 3 books, etc.
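Put actual numbers on that (these are invented) and the squeeze is obvious:

subscription = 9.99   # dollars per subscriber per month

for books_per_month in (1, 2, 3, 5, 10, 30):
    max_payout = subscription / books_per_month
    print(f"{books_per_month:>2} books/month: average payout per book must stay under ${max_payout:.2f}")

# A Power Reader at 30 books a month leaves about 33 cents per book,
# and that's before the service covers any of its own costs.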

Not long after that post, it turned out the payment terms for traditionally published books in KU were different than for self-published books. Traditionally published books receive the same payment as if the book had been bought — that is 70% of the purchase price. Further, certain self-published authors were given those or similar terms in order to convince them to put their books in the program.

Self-published authors can only participate in KU if they put their books in Kindle Select — that is, have those books exclusively at Amazon. Scribd does not require exclusivity. For some authors, Kindle Select makes sense. But for others, it doesn’t. Doing well at other vendors or wanting to avoid the risk of having a business depend on a single vendor are good reasons not to be in Select and therefore, not in KU.

Traditionally published books need not be exclusive. Because, as Amazon recognized, that would be a non-starter.

Arithmetic

What the Romance community knew, and what I suspect Amazon knew (because DATA!) and what Scribd apparently did not know (Because why would anyone pay attention to what goes on with those books women read?) is that Romance readers are the Great White Sharks of the reading world. They are the 80 in the 80/20 rule. They are the power in a power law.

Solving for X

Remember my ruminations over profit, book prices and that monthly subscription rate? Amazon had the data that would have told them everything they needed to know about those Power Readers (before KU debuted). Amazon solved the math problem with deep pockets but also by offering self-publishers a substantially worse deal. The KU reimbursement rates started decently, then took a swift dive until the reimbursement fell to around $1.34. Why? Well, either you sustain losses because of the Power Readers or you find a way to compensate for that. Falling KU reimbursement rates were exactly that compensation: KU's “flexible” reimbursement rates to self-published authors were Amazon's hedge.

As KU continued, Amazon kept talking about how much money they were putting into the monthly fixed KU pool to be distributed to the self-pubbed authors, but reimbursement rates from that pool continued to fall. Because the hedge was needed. (So I speculate.) Scribd had no such hedge in its business model. (To my knowledge, anyway.)

How did Scribd solve for X? They didn’t. It’s hard to understand why Scribd thought $8.99 for all readers was viable even in the medium term. If they knew about Power Readers then they either didn’t know enough or they thought the same thing most of the traditional world thinks about products for women. How could they possibly matter when they were up against REAL books and REAL readers?

$8.99 is a brilliant strategy for competing for potential KU subscribers. It’s not a brilliant strategy for paying authors/publishers in an environment that includes Power Readers. The rational solution after the short to medium term is to introduce tiered subscription rates. It’s blazingly obvious that in an environment that includes Power Readers you must also have a bazillion 1-2 book a month readers or you have to charge Power Readers more. Or you have to pay authors/publishers less. Scribd did a great job going after traditional publishers, and they probably had a better selection of books than Amazon. And, by the way, the word is lots of Power Readers (those sharks!!) had subscriptions to both services. Because the pool of books was different.

But if they charged those readers more, then KU looks more attractive… It’s a tough situation.

Solving for Y by Killing X

Scribd’s solution was to remove 80-90% of Romances from their service.

Sure. Of course. Now they will be paying out less to authors and publishers because the books women actually want to read are gone. Now that they've basically told the Power Readers they are unwelcome with all their womanly reading of THOSE books (who the hell knew they read that much???), what they have left are the 1-2 book a month readers.

This makes a certain sense. Because maybe what will happen is the Power Readers keep their subscriptions to both Scribd and KU, but now only borrow 1-2 books from Scribd and things are sustainable for a bit longer for them. Yes, an FU to romance readers, but Scribd maybe wasn’t in a position to feed the sharks.

If I were a Romance publisher ::cough::Harlequin/Avon::cough:: who just put substantial backlist into Scribd only to have their reader base told to fuck off, I think I’d be pretty pissed off.

The more established self-publishers, the ones who cannot afford Amazon exclusivity, whether financially or at the cost of reader relations, will likely move to Oyster in order to have some presence in a subscription system. I wonder if Oyster knows what's coming their way?

Cue the theme from Jaws….. LOOK OUT OYSTER!!!

Segue

Early on, long before KU, I put one book into Select in order to have data on the program. I asked my newsletter subscribers to tell me what they thought about my decision. Their answer? The non-Amazon readers were angry. Rightly so. That was enough for me. My experiment was done after the first angry letter. (After 90 days, you can elect not to re-enroll in Select.) If it had been possible, I would have ended it immediately, but I had to wait out the 90 days. I sent a copy of that book to every single reader who let me know how they felt.

Amazon’s Adjustment

The initial structure of KU, with its fixed reimbursement pool, meant that a longer book that would make $2-4.00 on a sale made $1.34 in KU. Shorter books, on the other hand, that would sell in the $0.99-1.99 range and thus net the author a dollar or less, also made $1.34 in KU. In other words, a book priced at $0.99 made $1.34 in KU. Anyone with half a brain can see that this made shorter books way more profitable and longer books way less profitable.

The adjustment Amazon made was to address that disparity. Instead of paying the same amount per borrow regardless of length, authors are now paid based on pages read. “Pages” read, actually. Basically, Amazon had to normalize what a page means for a digital book when displays are reflowable and resizable across different sized devices. A “Kindle Page” is the same for all devices regardless of settings. (Presumably, of course.)

To me, that's fair enough. Authors who write shorter books make up the difference by writing more books. I should think that's obvious, though apparently not. Category authors tend to write more books than single title authors. Three 30K-word books will make you the same as one 90K-word book, assuming the books are read all the way through.
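A rough sketch of the difference, with a made-up per-page rate (it is not Amazon's actual number):

PER_BORROW = 1.34    # the old flat payout per borrow
PER_PAGE   = 0.005   # hypothetical payout per Kindle Normalized Page read

def pages_payout(pages, fraction_read=1.0):
    return pages * fraction_read * PER_PAGE

print(pages_payout(450))       # one long book read all the way through: 2.25
print(3 * pages_payout(150))   # three short books read all the way through: 2.25, the same
print(3 * PER_BORROW)          # the old model: 4.02 for three short books vs 1.34 for one long one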

I have to shake my head at the suggestions from some that readers should make sure to page through shorter books, because otherwise those authors are screwed.

No they’re not. They’re only screwed if readers never actually finish the books, and if readers aren’t finishing their books, well, maybe those authors should worry about why that is. There absolutely is a market for shorter books and short stories. Just like there’s a market for longer ones. I have short stories, novellas, and novels on sale. They achieve different goals for me. I’m quite sure that readers have different goals and preferences for reading works of varying lengths.

Final Thoughts

I don’t have any books in KU. I did have books in Scribd, but I assume the only thing left is Scandal, which is currently free and so would not have been removed. I’ll probably go pull Scandal because I’m vindictive that way.

But now I’m kind of wishing I did have something in KU because at last at LONG LAST Amazon is giving authors data about how much of their books get read, but the only way to get it is to be in KU. I had this idea that authors could put a book in KU, let it sit for 90 days and watch the data about pages read. You’d rewrite if no one gets past Chapter 10. ::snort:: Mostly I’m kidding.

[Update: MelJean Brook pointed out that Amazon is NOT providing meaningful page read metrics so my plan would not work. There is no way to tell from the data provided if 2000 Kindle pages read is 2000 people reading one page or one person blowing through 2000 pages of an author’s work.]

I Lied. This is the Final Thought

I was talking to a friend the other night about why Amazon didn’t fix their issue sooner since they surely had the data about the problem of shorter works no later than 6 months in. Assuming that’s true, that gives them 6 months to develop, test, and QA and then prepare the PR for the Kindle Normalized Pages scheme. This is aggressive but doable. You’d have to test a lot of scenarios and then make absolutely sure all the calculations are correct and reach consistency.

Maybe the schema changes weren’t as big a deal as they would be in a traditional SQL Server or Oracle environment, but NoSQL solutions have different challenges, and one of them is hidden errors because of eventual consistency or problems with “schemaless” documents. (It’s only schemaless if you never hired a data architect, and if you didn’t sooner or later you’re fucked. *)

I’m thinking of Wattpad and its problem with user comments attributed to the wrong account. That’s a total NoSQL error that a good OLTP-trained data architect could have said, hold on a sec here… What happens if…. And then all the developers stick their fingers in their ears and sing LahLahLahLahLah because the architect just added 3 months to the delivery date. And nine months later your data is untrustable. There are scores of developers out there who got burned by thinking schemaless means never having to think about data consistency across transactions.

Eventually, your financial data has to be in a transactionally consistent state and stay that way and it can never ever revert to a previously inconsistent state. Or you can’t pay people correctly. So, you know, 6 months seems like a decent guess for how long it would take to roll it out and be certain it works for paying people reliably. The concept isn’t hard. The execution is.

Interesting.

* OMG. I actually made a database joke in a writing blog! More than one, actually. This is very strange.

Note: Regarding NoSQL, it's a very, very fast way of scaling data. Although UC Berkeley had one of the early such databases, Amazon more or less put the concept into widespread use, followed by the original developers at YouTube, who had to massively scale MySQL. Those guys needed to ramp up fast and on a scale that traditional transactional databases could not then achieve. When I say “documents” in the sense of a NoSQL database, I don't mean a Word document. I mean a collection of related items where Item 1 may not have the same information as Item 2 in the same set. In that sense, there is no “schema” (that is, a definition of what information the related data contains; in a transactional database, all objects of a defined type have the same structure, even where elements of the structure are NULL).

The NSA, by the way, collects your information in Hadoop, a NoSQL system backed up with some PostgreSQL functionality for the sorts of transactions that MUST be consistent.

This is a laughably high level explanation. It’s way more complicated. I’m a SQL Server DBA and Data Architect, but I’ve done some Mongo DB where we needed to address some shortcomings with our SQL Server applications without spending a fortune. For anyone who cares, Microsoft’s SQL Server 2014 changed the query optimization engine in significant ways — and I suspect it’s a direct response to NoSQL. For example my current employer had ugly queries that were taking 2 minutes (on completely under resourced SQL 2008 servers and for data that SHOULD have been in a datawarehouse but wasn’t, so I’m sorry, but the situation is long and convoluted and no one here cares, just know that 2 minutes for a query result is beyond embarrassing) that went down to 45 seconds when run on a SQL 2014 install.

Basically, the point is that the situation is considerably more complicated than, hey, let’s do it THIS way instead. Amazon is not just a company that sells stuff. They INVENTED the technology they needed to massively scale because no one else was doing that, and then they open sourced it. So when we talk about Amazon having advantages, the advantages are even bigger than most realize. Amazon IS data. I don’t think they do anything without knowing what the data says, and they have more data than anyone.

It’s why we’re seeing such an upheaval in publishing. It’s why Romance matters more and it’s why companies and analysts who dismiss Romance are in big trouble. Amazon knew about Power Readers. The usual gendered biases very likely got exploded by the facts. Traditional publishers need to lose the bias. Companies who want to compete in this space need to fire anyone who talks about REAL books and REAL readers.

The Romance Sharks will eat their lunch.


Comments Are Back

Saturday, January 31st, 2015

In case you were itching to comment, comments are back at the blog.


Notice

Friday, January 30th, 2015

Comments are temporarily disabled. Sorry.

Should be back on soon.


One Size Does Not Fit All – Books Prices in the EU

Thursday, January 1st, 2015

So.

There’s this whole VAT thing with the EU, where blah blah blah. Pricing difficulties blah blah blah. Rock and a Hard Place.

Short Version

I’m very sorry to say that at Nook, I have set all my books to US only. For now, it won’t be possible to buy Nook versions of my books outside the US. I hate that. Hate. It. But Nook has made it impossible to correctly account for VAT and the laws in certain countries that require book prices to be the same everywhere in that country.

Amazon aggressively price-matches Nook, including Nook in the UK. I know this because a few weeks ago it took Amazon UK all of 3 hours to price-match a Nook UK price change to .99, while Amazon US did not match for a couple of days.

Nook Press does three things that make it impossible to comply with the laws.

1. They require US-based authors to provide a price that does NOT include VAT.
2. They allow only one price for the entire EU
3. You can choose US-only OR all three: US + UK + EU.

This means I cannot be in Nook UK, because that option also puts me in the EU.
This means it is not possible to comply with Fixed Price Laws.
It also means that I can’t be at Nook at all with books where my traditional publisher has only North American rights, but that’s been true forever. I’m just complaining is all.

As an aside, it is also impossible to comply with Nook’s expectation that my Nook prices will not be higher than the prices I set at other vendors.

If I kept my books on sale at Nook under the current state of affairs at Nook Press, I would be unable to match my prices across the EU vendors AND I would have different prices at Nook.de, Amazon.de, iBooks.de, etc., when the law requires them to be the same. The same would be true of France. I would get a nasty-gram from Amazon informing me of the price discrepancies and, since I would be unable to address them, Amazon could either price-match or remove my book from sale.

The problem of different German prices (or French etc) is not a price matching issue. This is a regulatory issue, and Amazon is the one who will hear from the German authorities about not complying with German law. Amazon might have to take my book off sale in order to continue doing business in Germany.

(I would expect Nook to be hearing from France and Germany about this when/if those authorities notice that Nook prices are out of compliance, which they will be.)

This is not a risk I wish to take. Since my Nook sales are something like 99% US, I suppose my decision affects only a few readers. (Please contact me if you are one of those readers.)

The Longer Explanation

Three of the major vendors for self-publishing authors, Amazon, iBooks, and Google, make it possible to behave like a normal business and set prices in the various EU countries that account for VAT and also price books to end in .99. I can decide whether I will round down to a .99 price or round up to one. They also allow authors to make sure their prices are the same across vendors where there are fixed price laws for books.

Kobo, for those who are interested, expects US users to provide an EU price that INCLUDES VAT. They also only have one price for the EU, but because it includes VAT, you can, effectively, provide the same VAT-inclusive price everywhere and remain in compliance with German and French laws, assuming you (alas) set the German and French prices to the same VAT-inclusive price as everywhere else. Not very fair to the French, where VAT is so much lower, but it's that or nothing.

Because Nook does not include VAT and also only has one price for the entire EU, there is no way to guarantee the price will be the same where it needs to be.

Kind of Snide Aside

I always wondered why Nook is inflexible about how you sell in countries outside the US. I thought it was peculiar that they said “because of the volume” it could take several weeks for a book to appear on the UK or EU sites. Today, the answer finally kicked me in the shins.

The only reason volume would be an issue for populating a website is if they’re doing it mostly by hand. The beauty of a database driven website is that once you have the webpage talking to the database (waving hands and leaving out the bits about horrific SQL queries) there is little difference between putting one record on a page or 1,000,000,000 records. And even if we’re talking about terrible query performance, the time to render even a million records is minutes and in no possible case is it weeks. The only thing that takes weeks in this scenario is the person you’re paying to put the records into excel. Or worse, the person who is entering the data by hand into the servers located in the EU.

Even Longer Explanation

Basically, if you’re selling books, the laws about how to comply with the taxing and pricing authorities in the European Union just got a lot more complicated. For those who are thinking they’ll just wait for the EU tax authorities to come knocking, I will say that you have misunderstood what could happen. If you are selling your books to the EU via Amazon and the like, you are selling to the EU because those vendors have a presence in the EU. If your book at these vendors is priced such that you jeopardize their compliance with EU laws, they will likely have to remove your book from those countries. So, no, Germany will not collect a euro of VAT from you. But your books are likely to be yanked from all the German vendors so, yes, no VAT paid to Germany, but no one in Germany is buying your books.

Slight Aside

If you are selling books from your website and you sell to residents of the EU without remitting the appropriate VAT to their country of residence, then you will have some exposure there. Probably you could get away with it, but that does not make it ethical to do so. I have no idea what the IRS might say during an audit when you have income from the EU and can’t prove you don’t have to pay State tax on it, perhaps, or maybe, (total speculation here) the IRS would say something like, Hmm. The US has a treaty with Germany in which we agree not to screw each other over taxes. I dunno. I think I don’t want to find out.

Back to the Even Longer Explanation

VAT varies across countries in the EU. Further, in some EU countries, books must be the same price at all places in that country. Thus, if you are selling a book in Germany, that book must be the same price everywhere it’s on sale in Germany. For DIY authors, that means if a book is Euro 2.99 at Amazon.de, it must also be 2.99 at the German iBooks, the German Google, the German Nook, the German Kobo, etc. The same is true in France: same price in France across all French venues.

In the EU, the price shown to purchasers includes VAT.

Now, in Germany, VAT is 19%. Thus, if a book is priced at Euro 2.99 in Germany, after the sale is made .48 goes to the German government, leaving the remainder of 2.51 to be split between the vendor and author. As an author, I care about the part of that 2.99 that does not include VAT because that’s the amount used to calculate my royalty.

In France, VAT is 5.5%. Thus, for a book priced at Euro 2.99, in France, after the sale is made .16 goes to the French government leaving the remainder of 2.83 to be split between the vendor and author.

At Nook, where I am providing ONE VAT exclusive price for the entire EU, that price must have the appropriate VAT added to it, and that VAT rate varies. Suppose I say, OK, my book is $2.99 (American). Google-fu says that’s Euro 2.48. A quick test at Nook gave Euro 2.47. Using 2.47:

Add 19% VAT for Germany and the price is 2.94.
Add 5.5% VAT for France and the price is 2.61.
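Here's the same arithmetic in a few lines of Python, using the rates and prices from this example:

vat = {"Germany": 0.19, "France": 0.055}

nook_price_ex_vat = 2.47   # the single VAT-exclusive price Nook asks US authors for
for country, rate in vat.items():
    print(country, round(nook_price_ex_vat * (1 + rate), 2))
# Germany 2.94, France 2.61: one setting, two different consumer prices

display_price = 2.99       # a VAT-inclusive .99 price, as set at the other vendors
for country, rate in vat.items():
    ex_vat = display_price / (1 + rate)
    print(country, round(display_price - ex_vat, 2), "VAT,", round(ex_vat, 2), "left to split")
# Germany: 0.48 VAT, 2.51 left. France: 0.16 VAT, 2.83 left.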

Those are stupid prices to show consumers, but they are also prices I cannot guarantee will match the VAT inclusive prices I must give at EVERY OTHER VENDOR.

iBooks rounds up or down to .99 prices. I will NEVER be able to match Nook to Apple. Not ever except by total serendipity.

At Kobo, I give a single VAT INCLUSIVE price. So… which one do I pick at Kobo? iBooks Germany 2.99 or Nook Germany 2.94?

I could change the Nook EU price to 2.51 to give me a Nook Germany price of 2.99 and match Apple, Kobo, Amazon, and Google to that.

But then the French price at Nook becomes 2.65, which at Apple will be rounded up to 2.99 and …. boom. Not in compliance with French law. This is true as long as I have books on sale at Nook EU.

And that is why I no longer have books on sale at Nook EU. This is complicated enough as it is. Heck, I’m not even confident yet that I have managed to price everything as required, because I will tell you, iBooks did some crazy ass shit with prices that scares me, and Amazon’s VAT adjustment resulted in two of my US prices being raised. That’s not supposed to happen. But I know it did because a couple months ago I used Amazon’s pricing tool to reset some prices, which I logged so I could keep track, and also conformed at other vendors where Amazon recommended a price decrease (because I didn’t want to gouge others) and today, two of those Amazon books were back to the higher US price and therefore MORE than the price at other vendors.

::sigh::


Announcement

Thursday, April 10th, 2014

I’m temporarily disabling comments on the site while I deal with a comments issue.

Should be resolved soon.

And… it appears to be resolved. Comments are re-enabled.
