Improving pitch discrimination

28 02 2012

I’ve been taking part in a clinical study over the last few months on factors affecting auditory perception in patients with cochlear implants.

This study was conducted to determine whether cochlear implant sound processors can be adapted to improve speech perception. The program on my older processor (now two years old!) was changed to improve pitch discrimination, based on my discrimination abilities during testing, and the change was evaluated with speech perception tests.

During initial testing, the Hearing In Noise Test (HINT) was used. Two lists of ten common, simple sentences (such as “The weather looks good today”) were administered in quiet and in noise, with the noise sentences presented at a +10 dB signal-to-noise ratio, to give a baseline level of my ability to discriminate speech.

During the first session I undertook a pitch discrimination task. Two sounds (beeps) were played, and I was asked to say which was higher in pitch. Each sound was a separate electrode on my implant being stimulated, and this was continued for all electrodes to work out which ones gave the clearest pitch, and whether there were electrodes which sounded the same.  This went on for 2-3 hours … uggg! Based on the pitch task, a new program was added to my sound processor to try out, and I used this all the time. It had about 6 electrodes switched off and a simpler map.

During session 2, one month later, I underwent the same speech perception tests with the new program and then was given a different program to try based on the results of another pitch discrimination task.

During session 3, one month later, I underwent the same speech perception tests with the latest program and was then asked which program I preferred out of the two new programs and the original one I started with.  I couldn’t tell that there were any major differences between them; they were slightly different in the quality of the sound, but I could have lived with any of them. However, the speech discrimination tests told a different story …

HINT testing in quiet:

Bilateral:                Nov 2011: 57%   Dec 2011: 84%
Left ear only (older CI): Dec 2011: 48%   Feb 2012: 70%

Out of the 26 participants in this clinical trial group, only 4 saw an increase in their speech perception scores. It is likely that a simpler map allowed my brain to ‘sit back’ for a while, take the time to absorb sounds, then start again refreshed. 🙂

Now, it’s onwards and upwards with more auditory verbal therapy, as I’ve purchased a course of AVT from Pindrop Hearing on Harley Street. The course is based on LACE training, which is ideal for hearing aid users but not quite as effective for cochlear implant users. This new version of LACE has British accents, but the regular testing is done with US accents, as the comparative data is pulled from the US database of other LACE users.

I’m willing to try anything to increase my speech perception scores, so watch this space. I wonder if I will ever be able to hit 100%?

Cochlear implant mapping

4 05 2010

I’ve had 5 “mapping” sessions since activation. A mapping is a reprogramming of the cochlear implant, to readjust the electrical stimulation limits of the electrodes as each user’s brain adapts to sound and fibrous tissue grows over the internal implant. Mappings are typically carried out once a week for the first 6 weeks, then every few months, then twice annually.

At each mapping I was given increased volume and it was an opportunity to address any concerns with the audiologist. This was followed by a coffee break in the hospital cafe then a speech therapy session. I have one more mapping session this week, then my next one is in June when I have my 3 month check.

It’s been a rollercoaster ride. I started with beeps and pushed so hard that I got a constant whine whenever I put the implant on. This set me back: I had to slowly rebuild my tolerance of high frequency sounds from zero, bit by bit, and have successfully avoided a recurrence of the whine. I have not yet reached full volume; there is still some way to go, which is kind of scary. I found last week quite difficult as everything seemed too loud and I started feeling stressed, but I hung in there and carried on wearing the cochlear implant until I got used to the increased sound levels.

Increased sound levels can be problematic for cochlear implant users because they are more sensitive to loudness changes. A person with normal hearing can hear a wide range of sounds, from very soft whispers to loud rock bands; this dynamic range of hearing is about 120dB (normal speech falls within the 40-60dB range). However, the electrical dynamic range a cochlear implant can deliver is only about 20dB, and the processor’s input dynamic range (IDR), or sound window, determines how much of that 120dB of acoustic sound gets compressed into it. Therefore the cochlear implant user is more sensitive to changes in loudness than a hearing person.

If the IDR is small, sounds outside the IDR must be omitted or compressed: sounds that are too quiet are cut off, and sounds that are too loud are compressed and sound distorted. The 3 main brands of cochlear implants have different IDRs: Advanced Bionics offers 80dB, MED-EL 55dB, and Cochlear 45dB (with autoranging). I currently have my IDR set at 60dB.
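To make the compression concrete, here is a minimal sketch in Python of how an acoustic level might be squeezed into a narrow electrical range. The numbers and function names are mine, purely for illustration – this is not any manufacturer’s actual algorithm:

```python
def map_to_electrical(level_db, idr_db=60.0, ceiling_db=100.0,
                      electrical_range_db=20.0):
    """Illustrative sketch: squeeze an acoustic level (dB) into a
    narrow electrical stimulation range, as a CI processor might.

    Sounds above the ceiling are clipped (they sound distorted);
    sounds below the bottom of the IDR window are dropped entirely.
    """
    floor_db = ceiling_db - idr_db           # bottom of the sound window
    if level_db < floor_db:
        return None                          # too quiet: cut off
    clipped = min(level_db, ceiling_db)      # too loud: limited
    # Linear mapping of the IDR window onto the electrical range
    fraction = (clipped - floor_db) / idr_db
    return fraction * electrical_range_db

# A wider IDR keeps more of the quiet sounds (here, a 30dB sound):
for idr in (45.0, 60.0, 80.0):
    print(idr, map_to_electrical(30.0, idr_db=idr))
```

Running it shows why a wider IDR matters: with a 45dB or 60dB window the soft 30dB sound falls below the floor and is lost, while the 80dB window lets it through.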

What actually happens in a mapping session? I replace my processor’s battery with a direct connect input lead to the audie’s computer and put the processor back on my head. (Yeah, this freaked ME out the first time I did this).

The audie’s software will reprogramme my implant’s Harmony external processor.

My cochlear implant has 16 electrodes and when each one is stimulated, I will sense each stimulation as a beep.
The audie will set the Threshold (T) levels [the minimum stimulation needed to access soft speech and environmental sounds] and the Most Comfortable (M) levels [the amount of electrical current required to perceive a loud signal comfortably] for each electrode, by judging the most comfortable and the highest stimulation I can tolerate – the most comfortable and loudest beeps I am happy to listen to.

I use the loudness scaling chart to indicate to the audiologist which level each stimulation correlates to, ranging from ‘Just Audible’ to ‘Too Loud’.

Then the audie ensures the M levels are balanced across electrodes, so that each one is perceived as equally loud – I was able to tolerate very high levels of high frequency sounds this week, but she brought these back down; otherwise everything would have sounded weird and unbalanced.
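The T/M procedure above can be pictured as building a table of per-electrode levels and then balancing the M levels. This is purely an illustrative sketch – the class names, the unitless current values, and the crude averaging rule are my own assumptions, not what the audie’s software actually does:

```python
from dataclasses import dataclass

@dataclass
class ElectrodeLevels:
    """Hypothetical record of one electrode's settings in a 'map'."""
    electrode: int
    t_level: float  # threshold: minimum current to detect a soft sound
    m_level: float  # most comfortable: current for a loud-but-comfortable sound

def balance_m_levels(levels, tolerance=5.0):
    """Pull any outlying M level back towards the average, roughly as the
    audiologist balances perceived loudness across electrodes."""
    avg = sum(e.m_level for e in levels) / len(levels)
    for e in levels:
        if abs(e.m_level - avg) > tolerance:
            e.m_level = avg + tolerance * (1 if e.m_level > avg else -1)
    return levels

# 16 electrodes, with one high-frequency electrode set too loud
cimap = [ElectrodeLevels(i, t_level=20.0, m_level=60.0) for i in range(1, 16)]
cimap.append(ElectrodeLevels(16, t_level=20.0, m_level=90.0))
balance_m_levels(cimap)
print(cimap[-1].m_level)  # the outlier is brought back down
```

The point of the sketch is only the last step: one electrode that can tolerate much more current than the others still gets pulled back towards the group, so nothing sounds weird and unbalanced.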

This mapping method is rather tedious, drawn out over several months. Clarujust, new software from Audigence Inc (currently in FDA trials), takes a different approach: the patient and processor are interfaced to a computer, words are played and repeated as heard, and the software adjusts the map accordingly. Mapping this way reportedly takes 30 minutes. The software can be used by all hearing aid and cochlear implant companies except Advanced Bionics; however, Phonak signed up with Audigence Inc this year, prior to the Advanced Bionics/Sonova acquisition.

When a mapping is new, it tends to sound louder, until I get used to it. It takes 3 days to get used to a new mapping, then I find loud sounds have become softer and more tolerable, and I can hear a wider range of soft sounds. It is uncomfortable turning up the volume of life to the max every few days. I still have to brace myself for the jolt first thing in the morning, making the transition from complete silence to a full technicolour river of loud sounds pouring into my brain. Amanda’s Tip of the Day: If you wake with a hangover, take your time to put on your CI and turn down the volume. It helps. A little.

It’s an amazing learning process, as I am also trying to identify sounds, discovering amazing new ones, and learning to discriminate between things that sound similar to me. My hearing is like a baby’s: it needs time to learn and grow, but it can be fun too.

Erber’s model set forth 4 levels of auditory skill development:

    Awareness of sound (presence v absence)
    Discrimination (same v different)
    Recognition (associating sound/word with meaning)
    Comprehension (using listening for understanding)

I have now reached the second level: I am hearing things but finding it difficult to discriminate between some sounds. Obviously, this means I am still lipreading. In my speech therapy session this week, I discovered I can’t distinguish between Gear v. Dear, tIme v. tAme. I can listen and fill in a word missing in a written sentence, but listening to a whole sentence and being given one written word is more difficult. With hearing aids, both tasks would have been impossible.

In addition to mapping, my progress is occasionally evaluated with an audiogram and speech perception performance with the cochlear implant in a soundproof booth. These tests assist the mapping process and indicate any further adjustments required. I expect I’ll have this done this week, and hope to have improved upon the 18% I achieved in my last speech perception test.

I was programmed with ClearVoice last week but am still adjusting to my new mapping, so I have just been ‘tasting’ this wonderful addition. I tried it on the train; the roar of the train engine and clashing sounds (brakes or pressure pads? – haven’t worked this sound out yet) dropped away significantly and I could clearly hear voices around me. It was awesome. Yesterday, I was sitting by a window and became conscious of this sound. I realised it was the rain spitting outside. In the garage, I could hear the drumming of the rain on the roof and the traffic outside. With ClearVoice on, the traffic became very quiet and the rain became a very clear high PLINK PLINK PLINK, and a lower PLONK PLONK when it came through a hole in the roof and landed on an object. Again, awesome!

Try out the ClearVoice demo for yourself. Don’t forget to say the mandatory WOW!

Shanti is waiting for her cochlear implant operation date and works as a personal trainer and complementary therapist. She gave me a super aromatherapy massage yesterday and I left feeling very relaxed. As soon as I left, I plugged into my iPod and was amazed to hear that the tinny / Donald Duck tone of vocals had gone from a lot of songs. Perhaps there is a link between relaxation and better hearing. Today, voices sound largely normal and it’s so so so NICE to have some normality again!

Photos courtesy of Byron

Spectacles, testicles, wallet and watch

24 03 2010

In the name of the Father, Son and Holy Spirit. And Audie said, Let There Be Sound.

It’s switch-on day for my new bionic ear. I can’t say I’ve been looking forward to it as I didn’t know what to expect. You have to roll with the curveball that you’ve been thrown. Everyone is different and so there are a wide range of experiences from hearing nothing to understanding the audie right out of the gate. I was hoping to hear sounds similar to my hearing aid.  Being born deaf, that was probably unrealistic. I was feeling very very nervous as I stepped into the audie’s office.

The cochlear implant processor was plugged into the computer and put on my head. Eilene, my audie, first set the impedances – measures of the electrical resistance between the individual implant electrodes. Then she set the sound levels. I listened to 4 beeps, very much like a hearing test. I had to tell her when each beep was too soft, comfortable, or too loud. This set all 16 electrodes as each beep set 4 electrodes. Now I was set to go!

They unhooked me from the computer and Eilene tested me out with some sounds. I could hear her voice, but it was beep-beep-beep-beep. Clapping? Bip-bip-bip-bip. She rustled papers. Bip-bippity-bip-beep.

I am a bionic girl in a morse code world.

Wow. This is different!

I was so shocked at my switch-on that Smudge, my hearing dog, picked up on this and freaked out. He jumped up and down, looked at me as if to say ‘It’s okay mom’ and wouldn’t leave me be. Awwww bless! I was unable to hear Eilene rattle her keys. She asked me to put the plug in her office sink, run the tap, and unplug. Listen for the plug being put in, water running, water draining out, plug being taken out. I couldn’t hear a bloomin’ thing. Smudge then ran over to the sink and put his paws up on the rim – he obviously wanted a drink, so we put some water in a bowl for him. Beep-beep. Beep-beep. Beep-beep. That was the sound of him drinking water! It was beeps, but it was sound. Whooooo!

I went for a quick coffee break in a silent cafe, there were 2 small children with CIs playing computer games. I could hear them beeping like happy little R2D2s. Then I went to see Liz, my speech therapist. Liz took me through some speech sounds, I can tell when she is saying AAAAAAAH with her voice and without. I found it hard to tell the difference between AMA (beepbeep) and APA (beep – pause – beep) but with practice I will get better at this. I can tell the difference when she says TEA (beep), COFFEE (beepbeep) or HOT CHOCOLATE (beepbeepbeep).

These beeps will change over time into meaningful sounds as my brain learns and adapts. It’s like being a baby; I have to learn to hear all over again. Although I am disappointed to be hearing beeps, at least I am not hearing nothing, and I am not getting a sensation of being electrocuted instead of sound – both common switch-on reactions for those who were born deaf.  I am able to turn up the volume if I need more before my next mapping in 5 days’ time. The strange thing is, when I go outside, I hear nothing – this is something to do with the acoustics. I cannot hear my male colleagues, but my female colleagues are beeping away. I really AM starting from scratch again. I can now understand why this process is scary for some, especially those who had good hearing before they lost it.

(Captions courtesy of Howard Samuels and Bill Cresswell)

I was given my box of tricks to take home, filled with coloured covers to jazz up the implant, a dry-box, batteries, battery charger, car charger, different leads and ear hooks. And a health and safety manual. It’s very important not to plug a cochlear implant into the computer / laptop whilst the computer is plugged into the mains electricity supply. I’m also advised not to crawl under electric fences. 🙂

I’ve been wearing the implant for 10 hours now. Let’s see ….. what’s beeping?

Keyboard keys

Moving the keyboard on the desk **

Putting objects on the desk, e.g. mobile phone, glass, pen, oh heck – anything! **

Paper rustling **

Sighing (me)

Sniffing (me)  **

Phone dial tone

Light switch

Kettle boiling and switching off **

The fire doors at work closing **

Clothes landing in a pile **

Drop a pen on the carpet **

Hear someone talk (i.e. beep) in another room **

When I give Smudge a kiss **

Plosives in speech (T, P, B, K, G, J, D, Q) **

All the sounds with stars ** are sounds I cannot hear with my hearing aids. So there you go. Progress. Even if it’s in baby steps.

Beep beep!

Lipreading mobile phone

18 03 2010

A prototype lip reading mobile phone promises to end noisy phone calls. The technology allows people to have silent phone conversations by measuring the tiny electrical signals produced by muscles used when someone speaks.

The phone can record these pulses even when a person does not audibly utter any words and use them to generate synthesised speech in another handset.

The device was on show at the Cebit electronics fair (2-6 March) in Germany. It relies on a technique called electromyography which detects the electrical signals from muscles. It is commonly used to diagnose certain diseases, including those that involve nerve damage.

The prototype that was on display in Germany uses nine electrodes that are stuck to a user’s face. These capture the electrical potentials that result from moving the articulatory muscles used to produce speech. The electrical pulses are then passed to a device which records and amplifies them before transmitting the signal via Bluetooth to a laptop. There, software translates the signals into text, which can then be spoken by a synthesiser. In the future, the technology could be packed in a mobile phone for instantaneous communication. It could also form the basis of an instant translation system.
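The stages described above – capture at the electrodes, amplification, wireless transfer, recognition, synthesis – amount to a simple pipeline, which can be sketched as below. Everything here is hypothetical: the function names, the fake sine-wave “EMG” samples, and the placeholder recogniser and synthesiser are mine, not part of the prototype:

```python
import math

def capture_emg(n=64):
    """Stand-in for the nine facial electrodes: returns fake EMG samples."""
    return [math.sin(i / 4.0) for i in range(n)]

def amplify(samples, gain=10.0):
    """The recording device amplifies the tiny muscle potentials."""
    return [s * gain for s in samples]

def recognise(samples):
    """Placeholder: the real software translates signal patterns to text."""
    return "hello" if samples else ""

def synthesise(text):
    """Placeholder: the real handset speaks the text with a synthesiser."""
    return f"<spoken:{text}>"

# electrodes -> amplifier -> (Bluetooth link) -> recognition -> synthesis
print(synthesise(recognise(amplify(capture_emg()))))
```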

It is not the first time that electromyography has been explored for silent communication. The US space agency Nasa has investigated the technique for communicating in noisy environments such as the Space Station, and has also used it to explore advanced flight control systems that do away with joysticks and other interfaces. Nasa’s work, however, focused on recognising simple commands; the difference with this prototype is that continuously spoken sentences can be recorded and recognised.

Source BBC News

Exciting, eh? More electrodes for me! Although how Text Relay would cope with this, poses an interesting question …

Let’s get borgerised!

2 08 2009

I’ve never been keen on the idea of joining the cyborg crowd. A friend told me of her experience and this got me considering the idea more seriously. Martine was about the same age as me and, like me, had always been deaf. She received her cochlear implant a few years ago in Belfast. Six months after the operation, she decided to try her hearing aid in her other ear to compare it with the CI. She was stunned by the huge difference in quality of sound and binned the hearing aid. What you don’t have, you don’t miss! Last month I asked my doctor to refer me for a CI assessment. I thought there’s no harm in finding out more, perhaps they will tell me I’m not deaf enough for a CI or that it will be a benefit to me.

My first hospital appointment was with the speech therapist, Liz. She told me about the procedure and the risks involved. This CI centre has put 600 people through the procedure over the last 25 years, of whom 50 are non-traditional CI users, i.e. those who have always been deaf and have no or few neural pathways to make sense of new sounds. The operation takes a few hours; the patient goes home the next day and needs to take 2 weeks off work. Switch-on is 6 weeks later.

Rehabilitation – making sense of new sounds – takes a few weeks for most patients. For non-traditional patients like me, it can take up to 2 years, and the benefits are not as great. Those who have previously had hearing can use the phone and hear without lip reading; non-traditional users won’t be able to use the phone and will still need to lip read, although a lot of the strain is taken out of lip reading.

There are risks attached to this procedure, as in any operation. Patients need vaccinating against meningitis beforehand, as the skull is opened up and exposed, posing a greater risk of catching meningitis. There is a risk of facial paralysis, but Liz was very confident this wouldn’t happen, as the computer beeps when the surgeon gets too near the facial nerve and he simply moves away. There is a risk that the CI won’t work, but it can be replaced. The procedure is irreversible: I wouldn’t be able to go back to wearing a hearing aid, because the CI’s electrodes are inserted right into the cochlea to stimulate the auditory nerve directly, destroying all the residual hearing in that ear.

Liz then gave me a CI to look at. It is huge! It’s much bigger than my hearing aid. And it’s ugly.

Source: Wikimedia Commons

The part that sits behind the ear, and unfortunately looks just like a hearing aid (why oh why can’t designers come up with something cool to wear?), houses the microphone, the speech processor (which can be updated), and the battery. The round bit that sits on the head contains the magnet; this holds the coil in place over a magnet inside the skull and links, via an electromagnetic field, to the implanted electrodes. Up to 24 electrodes (or channels) are implanted to give enough perception of the speech frequencies; compare this to the 15,000 hair cells in a normally hearing ear. Sound won’t be heard as a hearing person would hear it. A few of my friends have had CIs recently and the first thing they said was ‘Everyone sounds like The Clangers!’.

The House Ear Institute has demos of what a deaf person hears through a cochlear implant. I’d love to hear your thoughts.

My next appointment was with the audiologist. I was told that some people are disappointed with their CI because they had very high expectations. A CI does not replace hearing; it is just a very powerful hearing aid that works in a different way to a traditional hearing aid, which utilises your residual hearing. It’s not a magic cure, and you can only work with what you’ve got. I was told that I’m certainly within the hearing loss range for a CI but that I’ve done extremely well with hearing aids, so that could work against me. Surely that effort with hearing aids should be transferable? Hearing with a CI means everything will be very loud to a person who hasn’t heard for a long time, like coming out of a week in a dark room into bright sunlight. It hurts and it’s overwhelming. I was given the standard hearing tests and finally a word test. The computer spoke about 12 sentences and I had to listen and repeat. No lip reading! The audiologist said I was likely to score zero and not to worry. I said I was going to guess, if that was ok with her. The look on her face was priceless when I scored 52%. Um. They should have used single words, not sentences! Then I would’ve scored zero, with no grammatical clues to work out what was being said.

Last week I went to see the consultant surgeon. Just prior to that appointment, I found an audiogram at home dated 1997. It showed quite a different hearing loss: I hadn’t realised I had lost so much hearing since then – 30 decibels. I took this audiogram with me and gave it to the surgeon (some dumbass hospital somewhere had lost all my medical records). His opinion was that I am certain to need a cochlear implant within ten years, as my hearing is deteriorating so much. I’ve barely noticed, but then I don’t listen, I don’t use the phone, I look and lip read all the time. So the question has now become one of when, not if. I don’t really have much choice. It’s going to be total deafness or a cochlear implant – forget the hearing aid.

This has suddenly become a lot more scary.

I’m very curious about the development of a new and superior cochlear implant which was reported in 2006. Maybe I should wait?

More updates soon. The next step is an MRI scan on Friday. Some shut-eye time!