Cochlear implant mapping

4 05 2010

I’ve had 5 “mapping” sessions since activation. A mapping is a reprogramming of the cochlear implant, to readjust the electrical stimulation limits of the electrodes as each user’s brain adapts to sound and fibrous tissue grows over the internal implant. Mappings are typically carried out once a week for the first 6 weeks, then every few months, then twice annually.

At each mapping the volume was increased, and it was an opportunity to raise any concerns with the audiologist. This was followed by a coffee break in the hospital cafe and then a speech therapy session. I have one more mapping session this week; my next one after that is in June, when I have my 3 month check.

It’s been a rollercoaster ride. I started with beeps and pushed so hard that I got a constant whine whenever I put the implant on. This set me back and I had to slowly rebuild my tolerance of high frequency sounds from zero, bit by bit, and I have since successfully avoided a recurrence of the whine. I have not yet reached full volume, and there is still some way to go, which is kind of scary. I found last week quite difficult as everything seemed too loud and I started feeling stressed, but I hung in there and carried on wearing the cochlear implant until I got used to the increased sound levels.

Increased sound levels can be problematic for cochlear implant users because they are more sensitive to loudness changes. A normal hearing person can hear a wide range of sounds, from very soft whispers to loud rock bands; this dynamic range of hearing is about 120dB (normal speech falls within the 40-60dB range). A cochlear implant processor, however, only captures a limited window of sound, its input dynamic range (IDR), and what it captures must then be compressed onto an electrical stimulation range of only about 20dB. Because so much acoustic range is squeezed into so little electrical range, the cochlear implant user is more sensitive to changes in loudness than a hearing person.

If the IDR is small, sounds outside it must be omitted or compressed: sounds that are too quiet are cut off, and sounds that are too loud are compressed and sound distorted. The 3 main brands of cochlear implants have different IDRs: Advanced Bionics has 80dB, MedEl 55dB, and Cochlear 45dB (but with autoranging). I currently have my IDR set at 60dB.
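To picture how that compression behaves, here is a minimal sketch in Python. It is purely illustrative: the window edges, the current values and the simple linear mapping are my own assumptions for the example, not any processor’s real algorithm. Sounds below the IDR window are lost, sounds above it are limited, and everything in between is squeezed onto a narrow electrical range.

```python
# Illustrative sketch only - not the actual algorithm used by any processor.
# Maps an acoustic input level (dB SPL) onto a narrow electrical range,
# after clipping to the processor's input dynamic range (IDR) window.

def map_to_electrical(level_db, idr_low=25.0, idr_high=85.0,
                      elec_min=100.0, elec_max=200.0):
    """Compress an acoustic level within a 60dB IDR window [idr_low, idr_high]
    onto an electrical range [elec_min, elec_max] (made-up current units)."""
    clipped = min(max(level_db, idr_low), idr_high)   # drop/limit out-of-window sound
    fraction = (clipped - idr_low) / (idr_high - idr_low)
    return elec_min + fraction * (elec_max - elec_min)

if __name__ == "__main__":
    for db in (10, 40, 60, 85, 110):   # whisper ... speech ... rock concert
        print(f"{db:>3} dB SPL -> {map_to_electrical(db):6.1f} current units")
```

With a wider IDR, more of that 120dB acoustic range makes it into the window before everything is compressed, which is why the setting matters.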

What actually happens in a mapping session? I replace my processor’s battery with a direct connect input lead to the audie’s computer and put the processor back on my head. (Yeah, this freaked ME out the first time I did this).

The audie’s software will reprogramme my implant’s Harmony external processor.

My cochlear implant has 16 electrodes, and when each one is stimulated I sense the stimulation as a beep.
The audie sets the Threshold (T) level [the minimum stimulation needed to give access to soft speech and environmental sounds] and the Comfort (M) level [the amount of electrical current required to perceive a loud signal comfortably] for each electrode by judging the most comfortable and the highest stimulation I can tolerate – the most comfortable and loudest beeps I am happy to listen to.

I use the loudness scaling chart to indicate to the audiologist which level each stimulation correlates to, ranging from ‘Just Audible’ to ‘Too Loud’.

Then the audie ensures the M levels are similar in terms of my perception, so that the volume is the same across the electrodes – I was able to tolerate very high levels of high frequency sounds this week but she brought these back down, otherwise everything would have sounded weird and unbalanced.
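If it helps to picture what the resulting map stores, here is a toy sketch, again in Python. The field names and the numbers are invented for illustration (this is not the real Harmony/SoundWave data format); the point is simply that every one of the 16 electrodes carries its own T and M level, and stimulation is kept between them.

```python
# Toy illustration of what a "map" conceptually holds - field names and values
# are invented for the example, not the real Harmony/SoundWave data format.

from dataclasses import dataclass

@dataclass
class ElectrodeLevels:
    t_level: float  # Threshold: the softest stimulation that is just detectable
    m_level: float  # Comfort: the current perceived as loud but still comfortable

# One entry per electrode (16 on my implant), from low to high frequency.
cochlear_map = [ElectrodeLevels(t_level=80.0 + i, m_level=180.0 + i)
                for i in range(16)]

def clamp_stimulation(electrode: int, requested: float) -> float:
    """Keep stimulation on one electrode between its T and M levels, so nothing
    is delivered below threshold or above the comfort ceiling."""
    levels = cochlear_map[electrode]
    return min(max(requested, levels.t_level), levels.m_level)
```

Balancing the M levels across electrodes, as the audie did this week, amounts to nudging those per-electrode ceilings until equal stimulation is perceived as equally loud.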

This mapping method is rather tedious and drawn out over several months. Clarujust is new software (currently in FDA trials) from Audigence Inc: the patient and processor are interfaced to a computer, words are played and repeated back as heard, and the software adjusts the map accordingly. Mapping this way reportedly takes 30 minutes. The software can be used by all hearing aid and cochlear implant companies except Advanced Bionics; however, Phonak signed up with Audigence Inc this year, prior to the Advanced Bionics/Sonova acquisition.

When a mapping is new, it tends to sound louder, until I get used to it. It takes 3 days to get used to a new mapping, then I find loud sounds have become softer and more tolerable, and I can hear a wider range of soft sounds. It is uncomfortable turning up the volume of life to the max every few days. I still have to brace myself for the jolt first thing in the morning, making the transition from complete silence to a full technicolour river of loud sounds pouring into my brain. Amanda’s Tip of the Day: If you wake with a hangover, take your time to put on your CI and turn down the volume. It helps. A little.

It’s an amazing learning process as I am also trying to identify sounds, discovering amazing new ones, and learning to discriminate between things that sound similar to me. My hearing is like a baby’s: it needs time to learn and grow, but it can be fun too.

Erber’s model sets out 4 levels of auditory skill development:

    Awareness of sound (presence v absence)
    Discrimination (same v different)
    Recognition (associating sound/word with meaning)
    Comprehension (using listening for understanding)

I have now reached the second level: I am hearing things but finding it difficult to discriminate between some sounds. Obviously, this means I am still lipreading. In my speech therapy session this week, I discovered I can’t distinguish between Gear v. Dear, or tIme v. tAme. I can listen and fill in a word missing from a written sentence, but listening to a whole sentence and being given only one written word is more difficult. With hearing aids, both tasks would have been impossible.

In addition to mapping, my progress is occasionally evaluated with an audiogram and a speech perception test, carried out with the cochlear implant in a soundproof booth. These tests assist the mapping process and indicate any further adjustments required. I expect I’ll have this done this week, and hope to have improved upon the 18% I achieved in my last speech perception test.

I was programmed with ClearVoice last week but am still adjusting to my new mapping, so I have just been ‘tasting’ this wonderful addition. I tried it on the train; the roar of the train engine and clashing sounds (brakes or pressure pads? – haven’t worked this sound out yet) dropped away significantly and I could clearly hear voices around me. It was awesome. Yesterday, I was sitting by a window and became conscious of this sound. I realised it was the rain spitting outside. In the garage, I could hear the drumming of the rain on the roof and the traffic outside. With ClearVoice on, the traffic became very quiet and the rain became a very clear high PLINK PLINK PLINK, and a lower PLONK PLONK when it came through a hole in the roof and landed on an object. Again, awesome!

Try out the ClearVoice demo for yourself. Don’t forget to say the mandatory WOW!

Shanti is waiting for her cochlear implant operation date and works as a personal trainer and complementary therapist. She gave me a super aromatherapy massage yesterday and I left feeling very relaxed. As soon as I left, I plugged into my iPod and was amazed to hear that the tinny / Donald Duck tone of vocals had gone from a lot of songs. Perhaps there is a link between relaxation and better hearing. Today, voices sound largely normal and it’s so so so NICE to have some normality again!

Photos courtesy of Byron





The unbearable loudness of hearing

25 04 2010

It has been one month since activation and my world has changed beyond recognition and exploded into a kaleidoscope of sounds. Some are old sounds which sound different, some are completely new. The world sounds different through a cochlear implant and it is starting to sound much better.

Each time I have a mapping, my bionic hearing is adjusted – at the moment we are still focusing on increasing the volume. For the last week I have been listening in awe to the (surprisingly noisy) birds, the crackle and pop of rice krispies, my office clock ticking, the ssss of my perfume atomizer, the jangle of keys and my dog’s clinking collar tag, and all the little sounds my dog makes when he wants something! I am discovering that some things, heretofore silent to me, actually do make a sound. The photocopier touchpad beeps, the door of the room next door squeaks (and now annoys me immensely), my hands rasp when I rub them together and so does moisturiser when rubbed on my skin, running my fingers up my arm makes a soft rasping sound too.

I have been utterly shocked by the cacophonous ssshh of brakes and beeps of doors on public transport, the roar of traffic, people in the street, the sharp crackle of plastic bags and paper, the clatter of crockery, the flushing toilet, the microwave nuking food, and the kill-me-now roar of aeroplanes (unfortunately, I live near Heathrow). Last Saturday was the first day in my life that I was able to hear all the birds, so I sat in the garden, in the sunshine, and listened. This also happened to be the first day of the airline stoppages due to the Icelandic volcano eruption, and the skies were silent. I only realised how much noise aeroplanes make this week, when the airports re-opened for business. Over the last three days, I have become quite overwhelmed by the loudness of some sounds, now that my implant’s volume is nearing an optimum level.

I went to a social event a few days ago and although it was noisy, I was able to pick out people’s voices more easily, which made lipreading easier. I heard a strange sound behind me and turned around to see a woman playing a harp. It sounded totally different from what I expected, like a soft guitar.

The strange thing is that high frequency sounds seem much louder to me than other sounds. A person with a hearing loss cannot screen or ‘filter’ out sounds in the way that hearing people do, so everything seems loud. This is why noisy places are so problematic: hearing aids and cochlear implants amplify all sounds, so environmental sounds are as loud as voices, and the hearing-impaired person cannot filter out the background noise (losing the ‘cocktail party effect’ that lets hearing people tune in to one voice). Now that high frequency sounds are so new to my brain, they seem extra loud to me; my brain is going WOW What’s This?, sitting up and taking notice, and is only now listening to low frequency sounds again. The world is starting to sound more normal. Voices still sound tinny, so it’s a struggle to understand speech.

I can now hear the dial tone on the phone. I started off by listening to phone sounds (these work on both PC and Mac) and will next try listening to a script I’ll give to a friend.

There are four levels of auditory skill development according to Erber’s model – awareness of sound (presence v absence), discrimination (same v different), recognition (associating sound/word with meaning) and finally, comprehension (using listening for understanding). As I was born deaf and have been deaf for 40 years, I’m going to struggle harder and for longer to climb up this ladder.

It is a common misconception that we hear with our ears. In fact, the auditory system, from the outer ear to the auditory nerve, merely provides the pathway for sound to travel to the brain. It is in the brain that we hear. If a person developed hearing loss after learning language (postlingual hearing loss), the brain will retain an “imprint” of the sounds of spoken language. The longer a person is deaf, the more challenging it is to recall these sounds. In the case of a person who has never heard (hearing loss at birth), or who has had very limited benefit from hearing aids, sound through a cochlear implant will be almost entirely new. That person will need to develop new auditory pathways, along with the memory skills to retain these new sounds. Whatever a person’s history, rehabilitation can be very useful in optimizing experience with a cochlear implant.

Being able to detect sound, even at quiet levels, does not mean that an individual will be able to understand words. Norman Erber developed a simple hierarchy of listening skills that begins with simple detection: being aware that a sound exists. An audiogram indicates detection thresholds. Although thresholds with a cochlear implant may indicate hearing at 20 dB to 40 dB (the range of a mild hearing loss), the ability to understand words can vary greatly. The next level of auditory skill is that of discrimination; that is, being able to determine if two sounds are the same or different. For example, one may detect the vowels oo and ee but not be able to hear a difference between the two sounds. The third level of auditory skill is identification. At this level, one is able to associate a label with the sound or word that is heard. Key words may be clear, such as cloudy or rainy, within the context of listening to a weather report. Erber’s final level of auditory skill is that of comprehension. Words and phrases provide meaningful information rather than just an auditory exercise. At the level of comprehension, a person is able to follow directions, answer questions, and engage in conversation through listening.

(Source: Advanced Bionics)

I’m still at the stage of detecting sounds and trying to move into the next stage of discriminating between sounds.  Two weeks ago, I was unable to tell the difference between PAT and BAT, TIN and DIN, FAN and VAN. With the practice I have done, I am now able to do this with almost no errors. I am now working on listening for the difference between MACE and MICE, and DEAR and GEAR – which is difficult as D and G sound so similar. I don’t know what to listen for so am hoping the brain kicks in at some point!

My speech perception is improving slowly. I have tried to make discrimination practice fun by listening to Amanda on Skype. She will give me a colour, or a month, or a day of the week, or a number between 1 and 10. Maybe next I will try tube stations or football teams, whatever I think I can cope with, to keep it fun. We decide which closed set we will do – using Mac to Mac, the in-built sound (and video, for lipreading) quality is very good, aided by my use of a direct-connect lead to my processor. I am trying to work towards ‘open sets’ – unknown sentences – by asking people to put a hand over their mouth and give me a sentence. Patrice gave me my first sentence this week: “Bob and Kirby are waiting for me in the car park” and I got it correct except for the word “car”. She gave me a second sentence and I got that spot on. With practice, I will improve. We also tried a discrimination exercise with the roadworks behind the office – I can now hear them, although they had been working there for a year and I had missed it all (lucky me). When they hammered, drilled, or dug with a spade, Patrice told me which it was and I listened for the different sounds.

Music is improving too. I am finding that rock with vocals louder than the music wins hands down. Opera sounds good; piano, flute and guitar sound quite good. There are musical resources specifically for CI users. Advanced Bionics offer Musical Atmospheres (free for AB users), available online or on CD: new music is discovered through 3 hours of recorded musical examples of increasing complexity, helping to establish a firm foundation for musical memory. They also offer A Musical Journey through the Rainforest and Music Time for children. Med El offer Noise Carriers, a musical composition available on CD from hearf@aol.com – see Listen, Hear! newsletter no.20 for further information. Cochlear don’t seem to have any resources, but they do offer tips.

I am finding that I am feeling soooo much better than I did with hearing aids. I used to have headaches almost every day; I was always exhausted from the constant effort of lipreading, reading the palantype (CART) and concentrating to make sure I didn’t miss anything, and I was stressed by the thought of any social event. Now, I’m not exhausted every evening, I’ve had one headache since activation, lipreading is somehow easier as I’m getting more audio input (even though people still sound like Donald Duck), and I feel much more relaxed overall, and more positive about communicating with ducks... people.

I’ve finally discovered the noisy world that everyone lives in. This noisy world should get a bit quieter this week when I get ClearVoice, which will automatically reduce background sounds so I can concentrate on voices. It’s almost a magic wand for hearing loss. All I will then need is to be able to comprehend speech, and I’ll do a convincing fake of a Hearing Person.

I’ve lost the clouds but I’m gaining the sky. And the sun will shine. You’ll find me out there, in the Hearing World, shining brightly with happiness. And as the video below nicely demonstrates, I want to kick your butt!