COM4511/COM6511 Speech Technology - Practical Exercise -
Keyword Search
Anton Ragni
Note that for any module assignment, full marks will only be obtained for outstanding performance that
goes well beyond the questions asked. The marks allocated to each assignment are 20%. The marks will be
assigned according to the following general criteria. For every assignment handed in:
1. Fulfilling the basic requirements (5%)
Full marks will be given for completing the work as described, in the source code and results submitted.
2. Submitting high quality documentation (5%)
Full marks will be given for a write-up that is at the highest standard of technical writing and illustration.
3. Showing good reasoning (5%)
Full marks will be given if the experiments and their outcomes are explained to the best standard.
4. Going beyond what was asked (5%)
Full marks will be given for interesting, well-motivated and clearly described ideas on how to extend the work.
1 Background
The aim of this task is to build and investigate the simplest form of a keyword search (KWS) system, allowing information
to be found in large volumes of spoken data. The figure below shows an example of a typical KWS system, which consists of
an index and a search module.

[Figure: keywords are submitted to the search module, which queries the index and returns the search results]

The index provides a compact representation of the spoken data. Given a set of keywords, the search module
queries the index to retrieve all possible occurrences, ranked according to likelihood. The quality of a KWS system is assessed
based on how accurately it can retrieve all true occurrences of the keywords.
A number of index representations have been proposed and examined for KWS. Most popular representations are derived
from the output of an automatic speech recognition (ASR) system. Various forms of output have been examined. These differ
in terms of the amount of information retained regarding the content of spoken data. The simplest form is the most likely word
sequence or 1-best. Additional information such as start and end times, and recognition confidence may also be provided for
each word. Given a collection of 1-best sequences, the following index can be constructed
$$
\begin{array}{llll}
w_1 & (f_{1,1}, s_{1,1}, e_{1,1}) & \cdots & (f_{1,n_1}, s_{1,n_1}, e_{1,n_1}) \\
w_2 & (f_{2,1}, s_{2,1}, e_{2,1}) & \cdots & (f_{2,n_2}, s_{2,n_2}, e_{2,n_2}) \\
\vdots & & & \vdots \\
w_N & (f_{N,1}, s_{N,1}, e_{N,1}) & \cdots & (f_{N,n_N}, s_{N,n_N}, e_{N,n_N})
\end{array}
\tag{1}
$$
where $w_i$ is a word, $n_i$ is the number of times word $w_i$ occurs, $f_{i,j}$ is the file in which word $w_i$ occurs for the $j$-th time, and $s_{i,j}$ and $e_{i,j}$ are the corresponding start and end times. Searching such an index for single-word keywords can be as simple as finding the correct row (e.g. $k$)
and returning all of its tuples $(f_{k,1}, s_{k,1}, e_{k,1}), \ldots, (f_{k,n_k}, s_{k,n_k}, e_{k,n_k})$.
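To make the index of equation (1) concrete, below is a minimal Python sketch of such a word-to-occurrences mapping; the function names (build_index, lookup) and the record layout are illustrative assumptions, not part of the assignment materials.

```python
from collections import defaultdict

def build_index(records):
    """Build the index of equation (1): word -> list of (file, start, end) tuples.

    `records` is assumed to be an iterable of (file, word, start, end) tuples,
    e.g. obtained by parsing 1-best ASR output.
    """
    index = defaultdict(list)
    for file_id, word, start, end in records:
        index[word].append((file_id, start, end))
    return index

def lookup(index, keyword):
    """Single-word lookup: return all (file, start, end) occurrences of `keyword`."""
    return index.get(keyword, [])

# Illustrative usage with made-up records
records = [("7654", "YES", 11.34, 11.54), ("7654", "YOU", 12.00, 12.34)]
index = build_index(records)
print(lookup(index, "YES"))   # [("7654", 11.34, 11.54)]
```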
The search module is expected to retrieve all possible keyword occurrences. If the ASR makes no mistakes, such a module
can be created rather trivially. To account for possible retrieval errors, the search module provides each potential occurrence
with a relevance score. Relevance scores reflect the confidence in a given occurrence being relevant. Occurrences with extremely
low relevance scores may be eliminated. If these scores are accurate, each eliminated occurrence will decrease the number of
false alarms; if not, the number of misses will increase. What exactly constitutes an extremely low score may not be easy
to determine. Multiple factors may affect a relevance score: confidence score, duration, word confusability, word context
and keyword length. Therefore, simple relevance scores, such as those based on confidence scores, may have a wide dynamic range
and may be incomparable across different keywords. In order to ensure that relevance scores are comparable among different
keywords they need to be calibrated. A simple calibration scheme is called sum-to-one (STO) normalisation
$$
\hat{r}_{i,j} = \frac{r_{i,j}^{\gamma}}{\sum_{k=1}^{n_i} r_{i,k}^{\gamma}}
\tag{2}
$$
where $r_{i,j}$ is the original relevance score for the $j$-th occurrence of the $i$-th keyword, and γ is a scale that either sharpens or
flattens the distribution of relevance scores. More complex schemes have also been examined. Given a set of occurrences with
associated relevance scores, there are several options available for eliminating spurious occurrences. One popular approach
is thresholding: given a global or keyword-specific threshold, any occurrence falling under it is eliminated. Simple calibration
schemes such as STO require thresholds to be estimated on a development set and adjusted to different collection sizes. More
complex approaches such as Keyword Specific Thresholding (KST) yield a fixed threshold across different keywords and
collection sizes.
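Below is a minimal sketch of STO normalisation (equation (2)) followed by thresholding; the function names, the default γ = 1 and the example scores are illustrative assumptions.

```python
def sto_normalise(scores, gamma=1.0):
    """Sum-to-one (STO) normalisation of one keyword's relevance scores (eq. 2)."""
    powered = [s ** gamma for s in scores]
    total = sum(powered)
    return [p / total for p in powered] if total > 0 else powered

def apply_threshold(occurrences, scores, threshold):
    """Keep only occurrences whose calibrated score is at or above the threshold."""
    return [occ for occ, s in zip(occurrences, scores) if s >= threshold]

# Illustrative usage with made-up scores for one keyword
raw = [0.9, 0.2, 0.05]
calibrated = sto_normalise(raw, gamma=1.0)   # [0.782..., 0.173..., 0.043...]
```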
Accuracy of KWS systems can be assessed in multiple ways. Standard approaches include precision (the proportion of relevant retrieved occurrences among all retrieved occurrences), recall (the proportion of relevant retrieved occurrences among all
relevant occurrences), mean average precision and term weighted value. A collection of precision and recall values computed
for different thresholds yields a precision-recall (PR) curve. The area under the PR curve (AUC) provides a threshold-independent summary statistic for comparing different retrieval approaches. The mean average precision (mAP) is another popular,
threshold-independent, precision-based metric. Consider a KWS system returning 3 correct and 4 incorrect occurrences arranged according to relevance score as follows: ✓, ✗, ✗, ✓, ✓, ✗, ✗, where ✓ stands for a correct occurrence and ✗ stands
for an incorrect occurrence. The precision credited at each rank (from 1 to 7) is 1/1, 0/2, 0/3, 2/4, 3/5, 0/6 and 0/7, where a zero is credited at ranks occupied by incorrect occurrences. If the number of true
occurrences is 3, the average precision for this keyword is (1/1 + 2/4 + 3/5)/3 = 0.7. A collection-level mAP can be computed by averaging
keyword-specific average precisions. Once a KWS system operates at a reasonable AUC or mAP level it is possible to use the term weighted
value (TWV) to assess the accuracy of thresholding. The TWV is defined by
$$
\mathrm{TWV}(K, \theta) = 1 - \frac{1}{|K|} \sum_{k \in K} \left( P_{\mathrm{miss}}(k, \theta) + \beta\, P_{\mathrm{fa}}(k, \theta) \right)
\tag{3}
$$
where $k \in K$ is a keyword, $P_{\mathrm{miss}}$ and $P_{\mathrm{fa}}$ are the probabilities of a miss and a false alarm, and β is a penalty assigned to false alarms.
These probabilities can be computed by
$$
P_{\mathrm{miss}}(k, \theta) = \frac{N_{\mathrm{miss}}(k, \theta)}{N_{\mathrm{correct}}(k)}
\tag{4}
$$

$$
P_{\mathrm{fa}}(k, \theta) = \frac{N_{\mathrm{fa}}(k, \theta)}{N_{\mathrm{trial}}(k)}
\tag{5}
$$
where $N_{\langle \mathrm{event} \rangle}$ denotes the number of the corresponding events. The number of trials is given by

$$
N_{\mathrm{trial}}(k) = T - N_{\mathrm{correct}}(k)
\tag{6}
$$

where $T$ is the duration of speech in seconds.
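Below is a minimal sketch of the average precision and TWV computations described above (equations (3)-(6)); the function names and input layout are illustrative assumptions, and the worked ✓/✗ example from the text is used as a check.

```python
def average_precision(ranked_hits, n_true):
    """Average precision for one keyword.

    `ranked_hits` is a list of booleans (True = correct occurrence), ordered by
    decreasing relevance score; `n_true` is the number of true occurrences.
    """
    n_correct_so_far = 0
    total = 0.0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            n_correct_so_far += 1
            total += n_correct_so_far / rank   # precision credited at this rank
    return total / n_true if n_true > 0 else 0.0

def twv(per_keyword_counts, total_speech_seconds, beta=20.0):
    """Term weighted value (eqs. 3-6) from per-keyword counts.

    `per_keyword_counts` maps keyword -> (n_miss, n_fa, n_correct).
    """
    losses = []
    for n_miss, n_fa, n_correct in per_keyword_counts.values():
        p_miss = n_miss / n_correct if n_correct > 0 else 0.0
        n_trial = total_speech_seconds - n_correct
        p_fa = n_fa / n_trial if n_trial > 0 else 0.0
        losses.append(p_miss + beta * p_fa)
    return 1.0 - sum(losses) / len(losses)

# Worked example from the text: ranks ✓ ✗ ✗ ✓ ✓ ✗ ✗ with 3 true occurrences
print(average_precision([True, False, False, True, True, False, False], 3))  # 0.7
```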
2 Objective
Given a collection of 1-bests, write code that retrieves all possible occurrences of the keywords in the provided keyword list. Describe the search
process, including the index format, the handling of multi-word keywords, the criterion for matching, relevance score calibration and
the threshold setting methodology. Write code to assess retrieval performance against the reference transcriptions according to the AUC,
mAP and TWV criteria, using β = 20. Comment on the difference between these criteria, including the impact of the parameter β.
Start and end times of hypothesised occurrences must be within 0.5 seconds of the true occurrences to be considered for matching.
3 Marking scheme
Two critical elements are assessed: retrieval (65%) and assessment (35%). Note: even if you cannot complete this task as a
whole, you can still provide a description of what you were planning to accomplish.
1. Retrieval
1.1 Index Write code that takes the provided CTM files (and any other file you deem relevant) and creates indices in
your own format. For example, if the Python language is used, then the execution of your code may look like
python index.py dev.ctm dev.index
where dev.ctm is a CTM file and dev.index is an index. A minimal indexing sketch follows the list below.
Marks are distributed based on the handling of multi-word keywords:
• Efficient handling of single-word keywords
• No ability to handle multi-word keywords
• Inefficient handling of multi-word keywords
• Efficient handling of multi-word keywords
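A minimal sketch of such an indexing script, assuming the NIST CTM format described in the Resources section and a simple whitespace-separated index file; the output layout is an illustrative choice rather than a required format.

```python
import sys
from collections import defaultdict

def read_ctm(path):
    """Yield (file, word, start, end, confidence) tuples from a CTM file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";;"):
                continue
            file_id, _channel, start, dur, word, conf = line.split()[:6]
            yield file_id, word, float(start), float(start) + float(dur), float(conf)

def main(ctm_path, index_path):
    # word -> list of (file, start, end, confidence), as in equation (1)
    index = defaultdict(list)
    for file_id, word, start, end, conf in read_ctm(ctm_path):
        index[word].append((file_id, start, end, conf))
    with open(index_path, "w") as out:
        for word in sorted(index):
            entries = " ".join(f"{f},{s:.2f},{e:.2f},{c:.2f}" for f, s, e, c in index[word])
            out.write(f"{word} {entries}\n")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])   # e.g. python index.py dev.ctm dev.index
```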
1.2 Search Write code that takes the provided keyword file and index file (and any other file you deem relevant)
and produces a list of occurrences for each provided keyword. For example, if the Python language is used, then the
execution of your code may look like
python search.py dev.index keywords dev.occ
where dev.index is an index, keywords is a list of keywords and dev.occ is a list of occurrences for each
keyword. A minimal multi-word matching sketch follows the list below.
Marks are distributed based on the handling of multi-word keywords:
• Efficient handling of single-word keywords
• No ability to handle multi-word keywords
• Inefficient handling of multi-word keywords
• Efficient handling of multi-word keywords
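One possible way to handle multi-word keywords is to look up the first word and then check whether the remaining words follow consecutively in the same file. Below is a minimal sketch of this idea; the 0.5 second maximum gap between consecutive words is an illustrative assumption, not a requirement of the assignment.

```python
def find_multiword(index, keyword_words, max_gap=0.5):
    """Return (file, start, end) tuples where the words of `keyword_words` occur
    consecutively; `index` maps word -> list of (file, start, end).

    `max_gap` is an illustrative limit on the silence allowed between words.
    """
    hits = []
    for file_id, start, end in index.get(keyword_words[0], []):
        cur_end, ok = end, True
        for word in keyword_words[1:]:
            # look for the next word starting shortly after the current one, in the same file
            nxt = [(f, s, e) for f, s, e in index.get(word, [])
                   if f == file_id and 0 <= s - cur_end <= max_gap]
            if not nxt:
                ok = False
                break
            cur_end = min(nxt, key=lambda t: t[1])[2]
        if ok:
            hits.append((file_id, start, cur_end))
    return hits
```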
1.3 Description Provide a technical description of the following elements
• Index file format
• Handling multi-word keywords
• Criterion for matching keywords to possible occurrences
• Search process
• Score calibration
• Threshold setting
2. Assessment Write code that takes the provided keyword file, the list of found keyword occurrences and the corresponding reference transcript file in STM format and computes the metrics described in the Background section. For
instance, if the Python language is used, then the execution of your code may look like
python <metric>.py keywords dev.occ dev.stm
where <metric> is one of precision-recall, mAP and TWV, keywords is the provided keyword file, dev.occ is the
list of found keyword occurrences and dev.stm is the reference transcript file.
Hint: in order to simplify assessment, consider converting the reference transcript from STM file format to CTM file format.
Using the indexing and search code above, obtain a list of true occurrences. The list of found keyword occurrences can then
be assessed more easily by comparing it with the list of true occurrences rather than with the reference transcript file in STM
file format.
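A minimal sketch of such a conversion using the uniform segmentation approximation mentioned in the Resources section; the fixed confidence of 1.0 and the record layout are illustrative assumptions.

```python
def stm_segment_to_ctm(file_id, channel, start, end, words):
    """Convert one STM segment to CTM-style records by spreading the segment
    duration uniformly across its words (exact word times are unavailable)."""
    if not words:
        return []
    dur = (end - start) / len(words)
    return [(file_id, channel, start + i * dur, dur, w, 1.0)
            for i, w in enumerate(words)]

# Illustrative usage on the STM excerpt shown in the Resources section
records = stm_segment_to_ctm("2345", "A", 0.10, 2.03,
                             ["uh", "huh", "yes", "i", "thought"])
for rec in records:
    print("%s %s %.2f %.2f %s %.1f" % rec)
```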
2.1 Implementation
• AUC Integrate an existing implementation of AUC computation into your code. For example, for the Python
language such an implementation is available in the sklearn package (a minimal sketch follows this list).
• mAP Write your own implementation or integrate any freely available one.
• TWV Write your own implementation or integrate any freely available one.
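A minimal sketch of how a PR curve and its AUC could be obtained with sklearn; sklearn.metrics.precision_recall_curve and sklearn.metrics.auc are one possible choice, and the labels and scores below are made up.

```python
from sklearn.metrics import auc, precision_recall_curve

# y_true: 1 if a retrieved occurrence is a true occurrence, 0 otherwise (made-up labels)
# scores: the corresponding calibrated relevance scores (made-up values)
y_true = [1, 0, 0, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3]

precision, recall, thresholds = precision_recall_curve(y_true, scores)
print("AUC of the PR curve: %.3f" % auc(recall, precision))
```

Note that in a full system the recall denominator must also account for true occurrences that were never retrieved, which this toy example ignores.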
2.2 Description
• AUC Plot the precision-recall curve. Report the AUC value. Discuss performance in the high-precision, low-recall
area. Discuss performance in the high-recall, low-precision area. Suggest which keyword search
applications might be interested in good performance specifically in those two areas (either high precision
and low recall, or high recall and low precision).
• mAP Report mAP value. Report mAP value for each keyword length (1-word, 2-words, etc.). Compare and
discuss differences in mAP values.
• TWV Report TWV value. Report TWV value for each keyword length (1-word, 2-word, etc.). Compare and
discuss differences in TWV values. Plot TWV values for a range of threshold values. Report maximum TWV
value or MTWV. Report actual TWV value or ATWV obtained with a method used for threshold selection.
• Comparison Describe the use of AUC, mAP and TWV in the development of your KWS approach. Compare
these metrics and discuss their advantages and disadvantages.
4 Hand-in procedure
All outcomes, however complete, are to be submitted jointly in the form of a package file (zip/tar/gzip) that includes
a directory for each task containing the associated required files. Submission will be performed via MOLE.
5 Resources
Three resources are provided for this task:
• 1-best transcripts in NIST CTM file format (dev.ctm, eval.ctm). The CTM file format consists of multiple records
of the following form
<F> <H> <T> <D> <W> <C>
where <F> is an audio file name, <H> is a channel, <T> is a start time in seconds, <D> is a duration in seconds, <W> is a
word, <C> is a confidence score. Each record corresponds to one recognised word. Any blank lines or lines starting with
;; are ignored. An excerpt from a CTM file is shown below
7654 A 11.34 0.2 YES 0.5
7654 A 12.00 0.34 YOU 0.7
7654 A 13.30 0.5 CAN 0.1
• Reference transcript in NIST STM file format (dev.stm, eval.stm). The STM file format consists of multiple records
of the following form
<F> <H> <S> <T> <E> <L> <W>...<W>
where <S> is a speaker, <E> is an end time, <L> is a topic label and <W>...<W> is a word sequence. Each record corresponds to
one manually transcribed segment of an audio file. An excerpt from an STM file is shown below
2345 A 2345-a 0.10 2.03 <soap> uh huh yes i thought
2345 A 2345-b 2.10 3.04 <soap> dog walking is a very
2345 A 2345-a 3.50 4.59 <soap> yes but it’s worth it
Note that exact start and end times for each word are not available. Use uniform segmentation as an approximation. The
duration of speech in dev.stm and eval.stm is estimated to be 57474.2 and 25694.3 seconds.
• Keyword list keywords. Each keyword contains one or more words as shown below