I’ve had some really good learning experiences developing my first Alexa Skill: it is almost ready to be submitted for approval.
Deconstructing the sample code and then adding in my own requirements was not as painful as I first thought. What I did find was that using multiple displays significantly improved the development process: code on one screen, Amazon Web Services showing the Lambda function code on another, and the Alexa Skills Kit with the JSON interaction model on a third. This made it easy to quickly change some code, run it, and see the outcome.
A fourth, unconnected screen was another laptop for creating content: the various sayings or question requests (called Utterances) and the variable values a user can include in such a request (called Slots). It was the slot values which took up a lot of the time to create, as these form the basis of the Skill.
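To show how utterances and slots fit together, here is a minimal sketch of an interaction-model fragment for a glossary-style skill like this one. All of the names below (the invocation name, DefineIntent, the term slot, TURF_TERMS) are hypothetical placeholders for illustration, not taken from the actual skill:

```javascript
// Hypothetical sketch of a glossary skill's interaction model, written as a
// JavaScript object mirroring the JSON the Alexa Skills Kit expects.
const interactionModel = {
  languageModel: {
    invocationName: "turf glossary", // placeholder name
    intents: [
      {
        name: "DefineIntent",
        // Utterances: phrases a user might say; {term} marks the slot.
        samples: [
          "what is {term}",
          "define {term}",
          "what does {term} mean"
        ],
        slots: [{ name: "term", type: "TURF_TERMS" }]
      }
    ],
    // The custom slot type lists the values the skill can look up --
    // the part that takes the longest to write for ~500 terms.
    types: [
      {
        name: "TURF_TERMS",
        values: [
          { name: { value: "aeration" } },
          { name: { value: "denitrification" } }
        ]
      }
    ]
  }
};

// Every sample utterance should reference the slot exactly once.
const samples = interactionModel.languageModel.intents[0].samples;
console.log(samples.every(s => s.includes("{term}"))); // true
```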
It all seems to be functioning correctly, except for a few technical terms which need a bit of work regarding phonetics in the synonyms option of the interaction model. I’ve tried a few phonetic spellings and they do seem to do the trick.
The few acronyms for the main terms aren’t successful, so I’ll have to revisit those. Mixing acronyms and proper words doesn’t seem to work.
Within the response text an acronym is spelt out so long as full stops are put after each letter, so that’s fine. A few acronyms are understood by Alexa without the need for full stops, e.g. pH, UK, US.
Subspecies, which can be shortened to ssp, has to be presented as s.s.p.; but as this is a voice response and the user doesn’t see the text, it doesn’t matter. This, I found, was a key point to remember when writing the response text. If the text had to appear in, say, a hard-copy dictionary, then some of it would look odd. For example, the chemical symbol for iron is Fe. If you put that in the response then Alexa will say “Fee”; by stating it as F.e. she responds correctly. For voice design this doesn’t matter.
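The full-stop trick can be automated with a small helper when preparing the response text. This is just a sketch of the idea; the function name and the list of abbreviations Alexa already says correctly are my own illustrative assumptions, to be extended as testing dictates:

```javascript
// Hypothetical helper: convert an abbreviation like "Fe" or "ssp" into the
// dotted form ("F.e.", "s.s.p.") that makes Alexa spell it out letter by
// letter. Abbreviations Alexa already understands without full stops
// (e.g. pH, UK, US) are left alone via a small exemption list.
const SPOKEN_AS_IS = new Set(["pH", "UK", "US"]);

function dotOut(abbr) {
  if (SPOKEN_AS_IS.has(abbr)) return abbr;
  // Put a full stop after each letter so Alexa spells it out.
  return abbr.split("").map(ch => ch + ".").join("");
}

console.log(dotOut("Fe"));  // "F.e."
console.log(dotOut("ssp")); // "s.s.p."
console.log(dotOut("pH"));  // "pH"
```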
So the crux of the matter is to think about how a voice response will be generated, and to trial it on the different systems. Having multiple screens certainly helped to speed up development time.
I had to make sure that the response was logical and fitted with the joining word of ‘means’, for example:
Alexa responds by saying “Denitrification” means “Unsuitable soil conditions … etc”. The third example (for Dew Point) would not flow correctly after ‘means’, so the response was changed to start with just “The temperature … etc.”. I currently have around 500 terms and expressions, and these will form the first part of the Alexa Skills submission, with updates thereafter.
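The “term means definition” pattern can be kept in one place in the Lambda code, which makes it easy to spot definitions that won’t read naturally after the joining word. The sketch below is a guess at the shape of such a helper; the function name and the placeholder definitions are illustrative assumptions, not the skill’s actual text:

```javascript
// Hypothetical sketch: build the spoken response from a glossary lookup.
// The definitions here are placeholders -- the real skill holds ~500 terms,
// each edited so it reads naturally after the joining word "means".
const glossary = {
  "denitrification": "the loss of nitrogen from the soil as a gas",
  "dew point": "the temperature at which air becomes saturated and dew forms"
};

function buildResponse(term) {
  const definition = glossary[term.toLowerCase()];
  if (!definition) {
    return `Sorry, I don't know the term ${term}.`;
  }
  return `${term} means ${definition}`;
}

console.log(buildResponse("dew point"));
// "dew point means the temperature at which air becomes saturated and dew forms"
```

Centralising the template like this means a definition that doesn’t flow only has to be fixed in one place, in the glossary data rather than the code.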
I also found that having a hyphen in the key term being asked about doesn’t always go too well with Alexa, so I’ve taken those out as well.
She seems to pronounce scientific names quite well; for example, the leatherjacket, Tipula paludosa, comes across very well.
The letter T can also pose an issue in some cases. Aeration said like “aerashion” gets recognised well, but with an emphasis on the T (only a nuance of a difference, mind) I’ve found she has difficulty picking it up; a synonym helps sort this out.
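Pronunciation variants like this are handled as synonyms on the slot value in the interaction model, where each synonym resolves back to the canonical term. The fragment below sketches that shape; the id and the phonetic synonym are my own illustrative guesses, not the skill’s actual list:

```javascript
// Sketch of a custom slot value with synonyms, as it would appear in the
// JSON interaction model. A hard-to-catch pronunciation (or a
// de-hyphenated spelling) listed as a synonym still matches the value.
const slotValue = {
  id: "AERATION", // hypothetical id used for entity resolution
  name: {
    value: "aeration",
    synonyms: ["aerashion"] // illustrative phonetic spelling
  }
};

// At runtime, entity resolution maps any matched synonym back to the
// canonical value; a simple lookup table mimics that for local testing.
const lookup = new Map(
  [slotValue.name.value, ...slotValue.name.synonyms]
    .map(s => [s, slotValue.id])
);

console.log(lookup.get("aerashion")); // "AERATION"
```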
Coding on my laptop is done in either Atom or Notepad++ which are both very good open source packages.
I haven’t quite worked out how I would use a ‘Card’ for my Skill, but as it has been designed for a non-screen Alexa, unlike say the Echo Spot or Echo Show, I’m fine with that for the time being.
Anyhow, just need to do some more trialling then I’m ready to submit to Amazon to see if it will be approved, or not!
Chris Gray, 18th February 2018