Improving pitch discrimination

28 02 2012

I’ve been taking part in a clinical study over the last few months on factors affecting audio-perception in patients with cochlear implants.

This study was conducted to determine if cochlear implant sound processors can be adapted to improve speech perception.  The program on my older processor (now two years old!) was changed to improve pitch discrimination, based on my discrimination abilities during testing, and the change was evaluated with speech perception tests.

During initial testing, the Hearing In Noise Test (HINT) was used. Two lists of ten common, simple sentences (such as “The weather looks good today”) were used in quiet and in noise, with the sentences administered at a +10 dB signal-to-noise ratio, to give a baseline measure of my ability to discriminate speech.
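For anyone wondering what a +10 dB signal-to-noise ratio actually means: the sentences are presented 10 dB above the background noise, which works out to ten times the noise power. A quick, purely illustrative Python sketch of that relationship (the numbers are made up; only the formula matters):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels, from relative power values."""
    return 10 * math.log10(signal_power / noise_power)

# Speech at ten times the power of the background babble -> +10 dB SNR,
# the condition used for the HINT sentences in noise.
print(snr_db(signal_power=10.0, noise_power=1.0))   # prints 10.0
```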

During the first session I undertook a pitch discrimination task. Two sounds (beeps) were played, and I was asked to say which sound was higher in pitch. Each sound was a separate electrode on my implant being stimulated, and this was repeated for all electrodes to work out which ones gave the clearest pitch, and whether any electrodes sounded the same.  This went on for 2-3 hours … uggg! I had a new program added to my sound processor to try out, based on the pitch task, and I used this all the time. This meant I had about 6 electrodes switched off and a simpler map.

During session 2, one month later, I underwent the same speech perception tests with the new program and then was given a different program to try based on the results of another pitch discrimination task.

During session 3, one month later, I underwent the same speech perception tests with the latest program and was then asked which program I preferred out of the two new programs and the original one I started with.  I couldn’t tell that there were any major differences between them; they were slightly different in the quality of the sound, but I could have lived with any of them. However, the speech discrimination tests told a different story …

HINT testing in quiet:

Bilateral: 57% (Nov 2011), rising to 84% (Dec 2011)
Left ear only (older CI): 48% (Dec 2011), rising to 70% (Feb 2012)

Out of 26 in this clinical trial group, 4 saw an increase in their speech perception scores. It is likely that a simpler map allowed my brain to ‘sit back’ for a while, take the time to absorb sounds through a simpler map, then start again refreshed. 🙂

Now, it’s onwards and upwards with more auditory verbal therapy, as I’ve purchased a course of AVT from Pindrop Hearing on Harley Street. The course, a new UK version of LACE training, is ideal for hearing aid users but not quite as effective for cochlear implant users. It has British accents, but the regular testing is done with US accents, as the comparative data is pulled from the US database of other LACE users.

I’m willing to try anything to increase my speech perception scores, so watch this space. I wonder if I will ever be able to hit 100%?





Brain stretch

12 07 2011

I’ve had my shoulders around my ears for most of the past year and so I went back to the hospital to see what they could do.  I was finding some noises were still quite loud and uncomfortable – rustling paper, people coughing, doors slamming, aeroplanes, people zipping up bags – and even the crockery being put away in the room next door was just too sharp …. and I didn’t have a lot of clarity with speech either. I wanted to ask my audiologist to turn down the volume, as my nerves were just shredded and I didn’t want to put up with much more of this. I was just fed up with the overstimulation. Is this normal? Is this what the world is supposed to be like? So noisy and ‘soundy’? I’d given myself 15 months to settle in and get used to the barrage of sounds, but on some days it felt overwhelming. Just too much. I didn’t know what was normal. What I should get used to. What is too loud and what isn’t. I felt as if I was drowning in a sea of sound. Whatevs….. I’d had enough.

When I undergo a mapping, I am able to indicate to the audiologist when I have reached a level of sound that feels comfortable to me, but above that, everything sounds the same. So the audiologist had been setting the levels higher and higher over the last year as I seemed fine with them. I just couldn’t tell the difference between loud and louder. I don’t really have any knowledge of sound and therefore I’m not very good at giving my audiologist information on my threshold and comfort levels. My brain was totally confused.

My audiologist decided to try something different: an objective test called NRI (Neural Response Imaging), used to program the speech processor. I didn’t have to do anything. I was connected to the computer through a PSP, a more powerful type of processor, and sat reading as the computer beeped its way through the sound levels and measured the responses directly from the auditory nerve. Electrical activity from the nerve was recorded using the electrodes inside the implant and displayed on the computer (see photo).

[Image: NRI measurements displayed on the audiologist’s computer]

Technical explanation of NRI test. Scroll down to figure 4 after the jump.

Non-technical explanation of NRI test.

The results showed my most comfortable level setting should be much lower. My implant was reprogrammed to the new settings, so I went from about 400 to 200 on the scale, across the board. This was a dramatic change – my volume was reduced by half!

I was still on the loud side with my new map, but it all seemed so quiet – as if I was in a dream. I was struggling to understand people (with lipreading). The balance of sounds had changed as well, so everything sounded odd and rather skewed. I was not happy at all. I tried switching to my old map after a few hours and it seemed horrifically loud.

We did a hearing test on the new map and that was -20 dB across the board. Then we did another hearing test on my old map with the volume reduced, but it turned out I could hear nothing at 250, 4,000 and 8,000 Hz, so we scrapped the old map. I was given a second map, the same as the new quiet one but a bit louder. I kept a programming slot on my processor for music, set to my new and quietest map.

We tried the Ling Six sounds after the mapping adjustment. I could hear them but got completely muddled with them, which was quite upsetting, considering I had managed a Skype conversation the previous week. The Ling Six sounds are the sounds that lie within the speech spectrum of hearing: m, oo, ah, ee, sh, s.  On the positive side, voices did sound clearer, and sounds didn’t hurt any more. My audiologist said my current map is still on the loud side and we would be looking to turn it down even further.

A friend said to me, ‘If it doesn’t bother a hearing person, it shouldn’t bother you either’ – food for thought. I was not noticing sounds that bothered me previously and was actually struggling to hear them.

I also learned something handy. I had complained about the hiss when listening through the direct connect lead to my iPod. It turned out the correct way to use it is to connect all the equipment up first, THEN switch the implant on by inserting the battery. The implant then searches for the iPod or whatever the lead is plugged into, and there is no background hiss.

A few weeks on, there has been noticeable progress. I am able to hear all the quiet sounds as before – my office clock ticking, the photocopier spitting out paper in the room next door, and a recent weird one, the optometrist’s breathing as she checked my eyes. I am able to understand some speech again and when a story was read to me, I found it noticeably easier to follow. Last week, I had the oddest experience. I was in a meeting with a barrister; she spoke very clearly, not too fast, in short sentences (the stuff of dreams!). Usually I write notes, keep glancing at the CART/STTR screen, and lipread, to keep up. It’s pretty hard work. On a number of occasions I was able to *hear* what she said without lipreading, and quickly check the CART screen afterwards, rather than be a slave to it. It was such a lovely feeling. I experienced it again that afternoon with another client who was a bit loud, it was just like having captions shoved straight into my brain. No brain processing…. no panicking …. so effortless ….. so easy.

**screams excitedly and jumps around the room**





Bouncing into a hearing life

16 04 2011


It was time for the one-year review of my cochlear implant.  I was so excited. I was so hungry for improvements in my hearing. I had been so delighted every time I heard a new sound, understood it, and passed another milestone. It had been 9 months since my last assessment, and I was hoping to come out with good results this time.

I saw my speech therapist and we talked about my new hearing in general. She took me to the soundproof booth and tested my hearing.  Mid-way through testing, I had to ask her to shut both doors to the soundproof booth (there are two in one doorway) as I could hear people talking. It turned out that my hearing had, yet again, improved overall, with a dip at the high frequency end. The blue line shows my current hearing level compared to my hearing before I received my cochlear implant.

[Image: audiogram comparing my hearing before the cochlear implant with my current level (blue line)]

Then we carried out some speech perception tests;

1. City University of New York (CUNY) sentences, a list of simple sentences such as ‘The dog bit the postman’. Guessing from context and rhythm, or top-down processing, means I can gain more marks. The first time I took this test, a year ago, I scored 40%. I took it again 3 months after activation and scored 48%.

2. A much harder test, the monosyllabic consonant-nucleus-consonant (CNC) test, which is a list of single words. There is no context so I can’t guess, although marks are given for getting bits of words right. My score on this test, after activation, was 33%.

I had to sit at a measured distance in front of a speaker and listen to a recorded voice at 70 dB. This time, I’d had 3 hours sleep the night before, I was numb with tiredness, and I was totally thrown by how loud and deep the speaker’s voice was. BOOM-BOOM-BOOM. Whoa. I almost fell off my chair.

This time, I was hugely disappointed to score 40% on listening to sentences and 33% on listening to phonemes, although I scored 98% on lipreading with sound. I had put sooo much rehabilitation work in over the past year and I felt so deflated. I felt as if I had gone back to square one. This cochlear implant thing sure is hard work.

I then met with my audiologist to have an adjustment made to my cochlear implant settings (often called a ‘mapping’).

Each electrode was tested for comfort. I heard tones ranging from very low to very high. The audiologist increased the volume of each tone (electrode) until I was comfortable with the level of loudness for each one.

Listen up ladies and gentlemen! The operative word here, the key word, is COMFORTABLE.

Not … ‘As loud as you can stand it’.

Not … ‘As loud as you would like it to be’.

Not … ‘Louder is better, just like the hearing aid’.

It turned out that my cochlear implant had been too loud – for a year! Waaay too loud. She gave me a new setting with the volume turned way down, and another setting with the volume a little louder, just in case I found the quietest setting too quiet. This makes me wonder about the efficacy of cochlear implant adjustments / mappings. I would like to see NRI (Neural Response Imaging) used more, or a better way of testing comfort and maximum threshold levels. I find it very difficult to tell the difference between loud and louder.

Myles de Bastion wrote an excellent essay explaining the problems with Bamford-Kowal-Bench testing for cochlear implant candidacy.

The balance of the frequencies on my cochlear implant was also adjusted, so the lower frequencies are now boosted and the higher frequencies are now lowered quite a lot. I can happily listen to crinkly plastic bags and screaming children now. My audie used the Harmony listening check to test my T-mic (microphone) to ensure it was fully functioning and enabling me to hear as well as I could. That got the all-clear.

Onto my rehabilitation ….

I’ve been pushing the auditory verbal therapy as I feel this is the golden key that will unlock my mind to understanding what I’m hearing. I had popped in to see my audie for an adjustment a few weeks ago, and after that, I was able to tell the difference between the Ling sounds OO and EE. The Ling 6 sounds are the sounds that lie within the speech spectrum of hearing, and they are M, OO, AH, EE, SH, S. I had persuaded my speech therapist to give me some free auditory verbal therapy and a week after my annual review, she gave me an AVT session (this is my second AVT session) and tested me on my listening skills, reading 3 passages from ‘The Emperor’s New Clothes’. I scored 82.5%, 83%, and 85%. HOWZAT! Much, much better than in the soundproof booth with that godawful speaker!

*Twirl*

I’m happy with my new volume setting. I had been cringing so much that my shoulders were around my ears, and I’d been ratty with the overstimulation or ‘sound hangover’, snapping at people when they clattered cutlery, unpacked a bag, or just generally made too much noise for my liking. Nothing makes me cringe any more. I’m now listening to talking books every chance I get – at work (tee hee), when shopping, walking the dog – not just on my commute. I figure that if I push the speech just like I did with the music, it will drive the momentum forward and get me there quicker. My interpreter is reading to me daily from a children’s book, Billionaire Boy by David Walliams – it’s quite funny and a challenging listen, as I can’t guess as much as I would with a conventional story….

‘The carpets were made from mink fur, he and his dad drank orange squash from priceless antique medieval goblets, and for a while they had a butler called Otis who was also an orang-utan.’

I’ve also lined up more from David Walliams, The Boy in the Dress and Mr Stink, and more of Harry Potter has gone onto my poor iPod, which is working overtime. Jacqueline, my auditory verbal therapist, recommended Bounce, a book by Matthew Syed, a three-time Commonwealth table-tennis champion. The book explains the rationale behind success: how the key to achieving greatness lies in hard work, the right attitude, and training. This book is now on my reading list.

I’m also taking part in a clinical study at my hospital, which is looking at factors affecting audio-perception with cochlear implants. The purpose of this study is to determine if cochlear implant sound processors can be adapted to improve speech perception. The team will make changes to my programme that are intended to improve pitch discrimination based on my discrimination ability and evaluate this with speech perception. I went in to do the first round of tests yesterday. They hooked me up to their computer and asked me to carry out a pitch discrimination task, listening to pairs of sounds – the same sounds used in mapping. I had to say which sound was higher in pitch. Each sound is a separate electrode on my implant being stimulated and this was done for all the electrodes, to work out which ones give the clearest pitch and if there are electrodes which sound the same. Then I was asked to do the same task again, but this time with volume, guessing which sounds are louder, softer, or the same. These tests were incredibly difficult. I scored 100% in some areas, and fairly badly in others. So the team have now set a baseline  of my discrimination ability to work from, and will be able to evaluate how much benefit my auditory verbal therapy will give me. In July I will return for another set of tests and will be given a program on my processor to try out. So I’m really trying to push the boundaries of my brain’s ability to hear, to make this cochlear implant as successful as it can be for me.

And … now for the best news of all. I’ve been given all my birthday presents at once. My cochlear implant team informed me that I’ve been approved to go bilateral. As the HiRes 90K implant is now back in production and available in the UK, I’m in the queue and hope to get my second bionic ear this summer. Two ears, even if one is very new and not performing as well as the older implant, will give me better hearing overall, thanks to the synergy of the two working together. So I will benefit from two ears fairly quickly; I won’t have to wait for a year or so to benefit from the newer one.  With two good ears, I will be able to detect the direction of sound, hear in noise, eliminate the head shadow effect, and hopefully do a pretty good fake of a hearing person.

I CAN’T WAIT, CAN’T WAIT, CAN’T WAIT!





Please, Audie, can I have some more?

18 08 2010

I had been told I had a 9 month wait until my next mapping session and I wasn’t too happy when I discovered Michele has a mapping session every month at her hospital, over a year after her activation. Apparently my large London hospital is too busy implanting children. In an attempt to jump-start my hearing’s learning curve, I popped in for a new mapping.

My settings were (HiRes-S with Fidelity 120):

Program slot 1: IDR60, ClearVoice medium, 100% T-mic

Program slot 2: IDR70, no ClearVoice, 100% T-mic

Program slot 3: IDR60, no ClearVoice, 50% T-mic

I asked for these to be changed to:

Program slot 1: IDR70, ClearVoice medium, 100% T-mic (my everyday program)

Program slot 2: IDR80, no ClearVoice, 100% T-mic (for music)

Program slot 3: IDR70, ClearVoice high, 50% T-mic (for noisy places)

I wanted an IDR of 80 to improve my enjoyment of music. I like an IDR of 70 as I can hear the lower tones in traffic and people’s voices. The default IDR for Advanced Bionics is 60, but I found this a little flat once hearing through a cochlear implant felt normal (at about 3 months). Not everyone likes a high IDR and it can take some getting used to. The IDR is the Input Dynamic Range of the cochlear implant processor – it is not the same as the volume.

Imagine, if you will, that you have been locked in a dark room for all of your life. The curtains are opened and bright sunlight floods in, diffused through the partially open window blind. At first, the light blinds you. It’s so strange and so bright that you can’t see. This is what it’s like at the cochlear implant activation. People who have been blind for a shorter time find it easier to cope with than those who have been blind all of their lives. Your eyes gradually adjust and you can make more sense of what you see – colours, outlines, contrast, etc. Then you open the curtains a little more – this is like increasing the volume of the cochlear implant. The window blind, behind the curtains, is pulled up a little so you can see more outside. Opening the blind is like increasing the IDR. You gain a wider range of sight and can see more objects, which get clearer the more you look at them. Or, you can hear a wider range of sounds, and the more you listen, the more distinct and individual they become.

ClearVoice is an amazing additional program which automatically softens background noise so I can hear speech. I like using ClearVoice high on the London streets and train stations. Today I used it in the office as we had roadworks outside and the loud drill was horrific – my poor colleagues had headaches but I just flipped a switch. *grins*

I had another hearing test and showed a slight improvement. It’s amazing – and very surreal – to have continually improving hearing!

[Image: audiogram showing my hearing levels from February to August 2010]

The lines on the audiogram show my hearing, from the bottom line up:

1: Black line: February 2010, before my cochlear implant

2: Green line: April 2010, 2 weeks after switch-on

3: Blue line: June 2010, 3 months after switch-on

4: Pink line: August 2010, 5 months after switch-on

Although I now have good hearing, my brain cannot process all of this information. It’s too new and too much data. I’ve been very deaf all of my life and this new information is not going to get sorted and filed in a matter of weeks! The likely timeline for optimum performance from a cochlear implant for a born-deaf candidate is 2 years. It is hard to watch other people do so well so quickly, but this is why I celebrate every little milestone – because it IS progress, and a snail can win the race just as well as the hare – it just takes longer.

I’m picking out the odd sentence or even paragraph in my audio book. In real life – much harder – I’m starting to pick out a few words here and there. I went shopping last weekend and asked the assistant if he could unlock the changing room for me. He said ‘Follow me’, unlocked the door, and I thanked him. He replied ‘You’re welcome’. As is often typical for a hearing person, he had not been looking at me when talking. Yet I managed to understand what he said. I was thrilled! I’ve heard people say ‘Excuse me’ when they shove past – not something I’ve heard before either.

Last weekend, I took a train and on the hour-long journey I had great fun listening to all the announcements, to see how much I could understand. I could hear about 70% of them. I met a friend in a noisy supermarket cafe and had no problem having a conversation with her (unthinkable in my hearing aid days). I then caught a train back, on a different network, but the announcements were not as loud or clear. Today, I’ve been able to hear quite a few of the announcements on the London Underground.

I’ve been practising using the phone for a week or so. The success of this varies according to vocabulary, clarity of the speaker, tone of voice, accent, type of phone being used, background noise ….. so this is still quite a hill I have to climb. I’m starting off with very simple sentences rather than conversations.

Our cat Hussy miaows all the time. She cries for food, attention, whatever. In 2 years, I have never heard her miaow. A couple of days ago I saw her come up to the doorway and this huge MIAOOOOW smacked me between the eyes. Or should I say, between the ears. I was stunned; the loudness and clarity took my breath away.

I haven’t been able to hear the smoke alarm for a few years and when the placement officer from Hearing Dogs came to visit a couple of weeks ago, she tested it for Smudge. EEEEEeeeeeeeEEEEEEEEeeeeeeeEEEEEEEeeee – it is a totally disgusting sound! Other surprising sounds in the home have been the extraction fan and bolognese sauce bubbling on the cooker.

Last week I went for a drink and sat outside talking to a friend at a pavement café – again, unthinkable with hearing aids. It was on Goodge Street which was very noisy indeed with rush hour traffic. With my cochlear implant, I had very little difficulty following the conversation. I was prompted to put my hearing aid in my other ear – not having looked at it for months – and was assaulted by an indescribable wall of loud meaningless sound. I thought a number of police sirens had suddenly started somewhere but there was just ordinary traffic. Nothing was clear and all the sounds were blended together. After a minute I took it off and I had a thumping headache. My head felt as if it had been kicked really hard on the hearing aid side, it actually throbbed with the pain.

It’s been an interesting 5 months. I’m hearing sounds I never realised existed. I’m enjoying sounds I’ve never heard before. I’m feeling so much less stressed with communication. I do have ‘off’ days when I feel as if life is too loud, or I only have half a head of hearing. It’s early days though and this will go away in time.  I’m so glad I took the road less travelled; I’m starting to reap what I and the medical team have sown, and actually, I’m thinking of getting a second cochlear implant. Hell – I’d get a third if they could find somewhere to put it!





Cochlear implant mapping

4 05 2010

I’ve had 5 “mapping” sessions since activation. A mapping is a reprogramming of the cochlear implant, to readjust the electrical stimulation limits of the electrodes as each user’s brain adapts to sound and fibrous tissue grows over the internal implant. Mappings are typically carried out once a week for the first 6 weeks, then every few months, then twice annually.

At each mapping I was given increased volume and it was an opportunity to address any concerns with the audiologist. This was followed by a coffee break in the hospital cafe then a speech therapy session. I have one more mapping session this week, then my next one is in June when I have my 3 month check.

It’s been a rollercoaster ride. I started with beeps and pushed so hard that I got a constant whine when I put the implant on. This set me back and I had to slowly build up my tolerance of high frequency sounds again from zero, bit by bit; I have successfully avoided a recurrence of the whine since. I have not yet reached full volume – there is still some way to go, which is kind of scary. I found last week quite difficult as everything seemed too loud and I started feeling stressed, but I hung in there and carried on wearing the cochlear implant until I got used to the increased sound levels.

Increased sound levels can be problematic for cochlear implant users because they are more sensitive to loudness changes. A normal hearing person can hear a wide range of sounds, from very soft whispers to loud rock bands; this dynamic range of hearing is about 120 dB (normal speech sits within the 40–60 dB range). However, the electrical range a cochlear implant can deliver spans only about 20 dB, so the processor uses an input dynamic range (IDR), or sound window, to decide which slice of that acoustic range gets compressed into it. The cochlear implant user is therefore more sensitive to changes in loudness than a hearing person.

If the IDR is small, sounds outside the IDR have to be omitted or compressed: sounds that are too quiet are cut off, and sounds that are too loud are compressed and sound distorted. The 3 main brands of cochlear implants offer different IDRs: Advanced Bionics up to 80 dB, MED-EL 55 dB, and Cochlear 45 dB (with autoranging). I currently have my IDR set at 60 dB.
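To make those numbers a little more concrete, here is a minimal Python sketch of the general idea – not any manufacturer’s actual algorithm. The IDR acts as a window on the incoming sound level, and whatever falls inside that window is squeezed into the much narrower electrical range between a threshold (T) level and a comfort (M) level. The window position, the 100–200 ‘clinical units’, and the simple linear mapping are all illustrative assumptions.

```python
def map_to_electrical(level_db, idr_db=60.0, max_input_db=100.0,
                      t_level=100, m_level=200):
    """Illustrative only: squeeze an acoustic level (dB SPL) into the
    electrical range between a threshold (T) and comfort (M) level.

    idr_db       - width of the input window (45-80 dB depending on brand)
    max_input_db - loudest input the window accepts (assumed value)
    t_level      - stimulation that is just audible (made-up clinical units)
    m_level      - stimulation that is comfortably loud (made-up units)
    """
    floor_db = max_input_db - idr_db            # quietest sound inside the window
    # Clamp to the window: quieter sounds are cut off at the floor,
    # louder sounds are limited (real processors compress them instead).
    clamped = min(max(level_db, floor_db), max_input_db)
    position = (clamped - floor_db) / idr_db    # 0.0 .. 1.0 within the window
    return t_level + position * (m_level - t_level)

# A whisper, conversational speech and a door slam, through a 60 dB
# window versus an 80 dB window:
for level in (30, 60, 110):
    print(level, round(map_to_electrical(level, idr_db=60)),
          round(map_to_electrical(level, idr_db=80)))
```

With the wider 80 dB window, the 30 dB whisper falls inside the range instead of being cut off at the floor – which is roughly why a larger IDR lets quieter detail through without simply making everything louder.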

What actually happens in a mapping session? I replace my processor’s battery with a direct connect input lead to the audie’s computer and put the processor back on my head. (Yeah, this freaked ME out the first time I did this).

The audie’s software will reprogramme my implant’s Harmony external processor.

My cochlear implant has 16 electrodes, and when each one is stimulated I will sense the stimulation as a beep. The audie will set the Threshold (T) levels [to access soft speech and environmental sounds] and Comfort (M) levels [the amount of electrical current required to perceive a loud signal] for each electrode by judging the most comfortable and the highest stimulation I can tolerate – the most comfortable and loudest beeps I am happy to listen to.

I use the loudness scaling chart to indicate to the audiologist which level each stimulation correlates to, ranging from ‘Just Audible’ to ‘Too Loud’.

Then the audie ensures the M levels are similar in terms of my perception, so that the volume is the same in each electrode – I was able to tolerate very high levels of high frequency sounds this week but she brought these back down, otherwise everything would have sounded weird and unbalanced.
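Put together, a map is essentially a small table of per-electrode limits. Here is a toy Python sketch of that bookkeeping, with invented level values and a made-up clamping helper – real fitting software does a great deal more, but the T/M idea looks roughly like this:

```python
from dataclasses import dataclass

@dataclass
class ElectrodeLevels:
    t_level: int   # threshold: just-audible stimulation (made-up units)
    m_level: int   # most comfortable loud stimulation (made-up units)

# A toy 16-electrode map; every value here is invented for illustration.
toy_map = {e: ElectrodeLevels(t_level=80 + e, m_level=190 + e)
           for e in range(1, 17)}

def clamp_stimulation(electrode, requested_level, ci_map):
    """Keep stimulation between that electrode's T and M levels, so nothing
    is delivered below audibility or above the comfortable maximum."""
    levels = ci_map[electrode]
    return min(max(requested_level, levels.t_level), levels.m_level)

# A request well above comfort on electrode 5 gets pulled back to its M level:
print(clamp_stimulation(5, 250, toy_map))   # prints 195
```

Balancing the M levels across electrodes, as described above, would then amount to nudging those m_level values until equally loud beeps actually sound equally loud.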

This mapping method is rather tedious and drawn out over several months. Clarujust is new software (currently in an FDA trial) from Audigence Inc: the patient and processor are interfaced to a computer, words are played and repeated back as heard, and the software adjusts the map accordingly. Mapping this way reportedly takes 30 minutes. This software can be used by all hearing aid and cochlear implant companies except Advanced Bionics; however, Phonak signed up with Audigence Inc this year, prior to the Advanced Bionics/Sonova acquisition.

When a mapping is new, it tends to sound louder, until I get used to it. It takes 3 days to get used to a new mapping, then I find loud sounds have become softer and more tolerable, and I can hear a wider range of soft sounds. It is uncomfortable turning up the volume of life to the max every few days. I still have to brace myself for the jolt first thing in the morning, making the transition from complete silence to a full technicolour river of loud sounds pouring into my brain. Amanda’s Tip of the Day: If you wake with a hangover, take your time to put on your CI and turn down the volume. It helps. A little.

It’s an amazing learning process, as I am also trying to identify sounds, discovering amazing new ones, and learning to discriminate between things that sound similar to me. My hearing is like a baby’s: it needs time to learn and grow, but it can be fun too.

Erber’s model set forth 4 levels of auditory skill development:

    Awareness of sound (presence v absence)
    Discrimination (same v different)
    Recognition (associating sound/word with meaning)
    Comprehension (using listening for understanding)

I have now reached the second level: I am hearing things but finding it difficult to discriminate between some sounds. Obviously, this means I am still lipreading. In my speech therapy session this week, I discovered I can’t distinguish between Gear v. Dear, or tIme v. tAme. I can listen and fill in a word missing in a written sentence, but listening to a whole sentence and being given one written word is more difficult. With hearing aids, both tasks would have been impossible.

In addition to mapping, my progress is occasionally evaluated with an audiogram and speech perception performance with the cochlear implant in a soundproof booth. These tests assist the mapping process and indicate any further adjustments required. I expect I’ll have this done this week, and hope to have improved upon the 18% I achieved in my last speech perception test.

I was programmed with ClearVoice last week but am still adjusting to my new mapping, so I have just been ‘tasting’ this wonderful addition. I tried it on the train; the roar of the train engine and clashing sounds (brakes or pressure pads? – haven’t worked this sound out yet) dropped away significantly and I could clearly hear voices around me. It was awesome. Yesterday, I was sitting by a window and became conscious of this sound. I realised it was the rain spitting outside. In the garage, I could hear the drumming of the rain on the roof and the traffic outside. With ClearVoice on, the traffic became very quiet and the rain became a very clear high PLINK PLINK PLINK, and a lower PLONK PLONK when it came through a hole in the roof and landed on an object. Again, awesome!

Try out the ClearVoice demo for yourself. Don’t forget to say the mandatory WOW!

Shanti is waiting for her cochlear implant operation date and works as a personal trainer and complementary therapist. She gave me a super aromatherapy massage yesterday and I left feeling very relaxed. As soon as I left, I plugged into my iPod and was amazed to hear that the tinny / Donald Duck tone of vocals had gone from a lot of songs. Perhaps there is a link between relaxation and better hearing. Today, voices sound largely normal and it’s so so so NICE to have some normality again!

Photos courtesy of Byron





Jumping the banana

8 04 2010

Having been assessed as deaf enough for a cochlear implant, and having passed the associated tests, I was wondering how much of an improvement in hearing the implant has given me.  Lots of new high frequency sounds have been popping up, whilst low frequency sounds have only just started coming back. It has been exactly two weeks since my cochlear implant was activated, and my world has certainly changed in that short time.

I went for another mapping session to increase the volume and tweak the settings. I can hear music fairly well, although rock and piano music sound scratchy, with the singer sounding as if he has laryngitis. Not a good sound. However, I discovered that opera sounds good and there is plenty of that on YouTube. I am able to follow a melody and detect when there are words, but not understand them. Japanese music also sounds passable at the moment. I spent a long time on iTunes trying out different styles of music to discover what was pleasant to listen to, as I believe in the power of music to help achieve great things. I have purchased Ravel’s Bolero, Grooploop – Piano (Japanese Animation: Studio Ghibli Soundtrack), and John Kaizan Neptune & Také Daké: Asian Roots.

Remember my activation video, two weeks ago, when I couldn’t hear the bells? I can hear those bells now, no problem! I can hear the blackbirds in the trees outside my house. I am now hearing the trains and traffic that have been missing for the past two weeks, as my brain focused on the new high frequency sounds. The tinnitus has largely disappeared in my bionic ear but is still present in my unaided ear (mostly mad musical performances).

Today, I had a hearing test, and I’m jumping bananas. Speech bananas, that is. Whoop!

A speech banana is the banana-shaped area on the audiogram that covers the sounds of speech, or phonemes, of all the world’s languages. I have never been able to hear most of the speech sounds within the banana, even with hearing aids, so I relied on lipreading and became very good at this.

My new audiogram had my jaw hitting the floor in shock. For once, I was speechless.

[Image: audiogram]

Blue circles : my left ear as it was up until two weeks ago.
Green circles : bilateral hearing with hearing aids (Note: bilateral scores higher than unilateral)
Red circles : left ear with cochlear implant.
NR : No Response

I have yet to be tested wearing a cochlear implant plus hearing aid together, which will increase my speech comprehension scores.

CUNY test (lipreading with sound) > Nov 2009 : 91%

CUNY test (lipreading with sound) > Apr 2010 : 85%

BKB test (listening only) > Nov 2009 : 24%

BKB test (listening only) > Apr 2010 : 18%

As everyone still sounds like Donald Duck, I was not expecting to do very well in my speech perception tests. With daily practice at listening, these figures will improve. My sound stimulation levels also have a lot further to go – I’m about half way to my target in this respect. So …. *shrugs*

So why am I showing good results in the hearing test, yet a poor performance in speech perception? And how do these results translate to the real world? A hearing test is always carried out in a soundproof room, and it is performed using pure tones, but speech is made up of complex sounds. Add background sounds in the real world, and speech perception becomes even more difficult. Add the cocktail party effect, and it’s not easy to manage in noisy environments with a hearing loss, as you lose the ability to discern speech in noise. Such measurements do, however, offer audiologists a way of measuring progress. I now have a baseline to start from (BKB 18%, CUNY 85%) and can monitor my progress against my new figures. Later on, I also expect to be tested in noise. In the real world, Advanced Bionics’ new ClearVoice software is going to help me with the cocktail party effect, and I expect to get this next month.

Donald Duck aside, it is going to take some time for my brain to decode the new stimulation, especially in the high frequencies where I had no sensation before.  An analogy would be (re)learning the Roman alphabet, only to find written French instead of English on the page. According to my audiologist, the best strategy to manage this is to practise daily: listening to an audio book while following the unabridged text at the same time. I have collated several links for such rehabilitation work on my cochlear implant rehabilitation page. My audie says the more work I do, the more new pathways my brain will create, and the better my brain will become at deciphering the strange new sounds. For someone who was born deaf, like me, this process can take up to two years. I need to remember three key things – patience, persistence and practice. It has really helped to have a mentor throughout the whole process – someone to ask questions of and to get encouragement from at every stage of the cochlear implant journey. You can find a UK mentor here and a US mentor here.

My world is opening up and all the colour is flooding in – at last!

(I still can’t get over that darn audiogram!)