Help:MediaSearch/ja

MediaSearch is the new search front-end deployed on Wikimedia Commons (the back-end mechanism goes by the same name). It displays images in a tiled grid, much like the layout of web image search engines. Please post any feedback about MediaSearch on its talk page on Commons.

To maximize the chances that a particular image will be found via Special:MediaSearch:
 * Give the file a title that describes its content
 * Add appropriate captions, in as many languages as possible, describing what the file shows
 * Write a detailed description covering the file's subject and any other substantive information
 * Add relevant categories to the file
 * Add detailed depicts statements for what can be seen in the image

The following is a summary of what data can be used and how it helps in finding files. Image search mainly relies on two kinds of data:
 * Full text
 * Statements and structured data

Full text
How
This is the text-based search that has been used for a long time: a file matches when its descriptive text contains the search terms.

Ranking is influenced by two factors:
 * Frequency of terms
 * Position of terms

Frequency of terms

The search algorithm uses the frequency of the search terms as a basis for judging the relevance of results.

The more often a search term appears in a given text, the more relevant that text is assumed to be (e.g. if one text contains the word "Mona Lisa" more often than other texts do, that file is probably more relevant).

The more texts a search term appears in overall, the less relevant it becomes (e.g. very common words, such as "the", occur in all kinds of texts and are of little use for relevance ranking).

On the English Wikipedia, a wikitext search for "Mona Lisa" can use this to discover that the article titled "Mona Lisa" (184 hits) is probably a better result than the article on the Louvre museum (7 hits).
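The two frequency heuristics above are essentially the classic TF-IDF weighting. Below is a minimal Python sketch of the idea; the counts are illustrative, and the real search back-end uses a more sophisticated scoring model:

```python
import math

# Toy corpus: term counts per article (hypothetical numbers).
docs = {
    "Mona Lisa": {"mona": 184, "lisa": 184, "louvre": 12},
    "Louvre": {"mona": 7, "lisa": 7, "louvre": 240},
    "Paris": {"louvre": 30},
}

def tf_idf(term, doc_name):
    counts = docs[doc_name]
    # tf: how often the term occurs in this document
    tf = counts.get(term, 0) / max(sum(counts.values()), 1)
    # idf: terms that appear in many documents carry less weight
    df = sum(1 for d in docs.values() if term in d)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

def score(query_terms, doc_name):
    return sum(tf_idf(t, doc_name) for t in query_terms)

# "Mona Lisa" appears far more often in the "Mona Lisa" article:
assert score(["mona", "lisa"], "Mona Lisa") > score(["mona", "lisa"], "Louvre")
# "louvre" appears in every document, so its idf (and weight) drops to zero:
assert tf_idf("louvre", "Paris") == 0.0
```

Note how the second assertion illustrates the point about overly common terms: a word that occurs in every text contributes nothing to the ranking.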

On Commons, however, the hit rate of a search term is not always useful for comparing relevance. Files have descriptive text, but not long articles: a given search term is typically used only once or twice, and there is little else to tip the scales. For that reason, ranking also takes the position of terms into account.


Position of terms

There are multiple ways to input information about a file. They each contribute to the final relevance score, but in a different way.

Wikitext descriptions are historically considered most important in presenting file information, but they sometimes contain so much information that significant terms often don't stand out as much when it comes to search relevance. Alternatively, they sometimes contain very little information, which gives search little to work with to determine relevance.

For example, details like the author, the place or date that a media file was created, what museum it belongs to, or what license it is published under, while important, are often not the terms that people will search for. Furthermore, significant parts of a description are often "contextual" information, not pertaining directly to the main subject.

Also, while descriptions often contain a lot of information that can be very important in order to find the file, it can be hard to make out exactly what the file is about based on the terms in the description alone. Descriptions can be long (and even contain multiple languages and information that’s irrelevant to the search term). In other words, it is hard to determine relevance with descriptions.

Additional data that describes things in a more succinct way (such as titles, captions, categories) is often focused on highly specific information, which helps determine what's important in a media file; in other words, this data makes determining relevance easier. This is why the position of terms is important.

For example: when searching for "Mona Lisa," a file that contains "Mona Lisa" in the description alone will usually be ranked lower in search results than one that also includes that term as part of the title and/or caption, and/or is added to (one of) the Mona Lisa categories.

However, note that duplicating information across fields in wikitext also may have the unintended consequence of lowering frequency-based relevance scores - so be sure to accurately describe the file by adding a relevant title, a detailed description, a caption (ideally in multiple languages), and the appropriate categories, without repeating the same information in multiple places.
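The effect of term position can be sketched as a field-weighted scoring function. The field names and boost values below are illustrative assumptions, not MediaSearch's actual configuration:

```python
# Illustrative field weights (not MediaSearch's real boost values):
# matches in titles and captions count for more than matches in descriptions.
FIELD_WEIGHTS = {"title": 3.0, "caption": 2.0, "category": 2.0, "description": 1.0}

def field_score(query, file_fields):
    """Sum the weight of every field whose text contains the query."""
    q = query.lower()
    return sum(
        FIELD_WEIGHTS[field]
        for field, text in file_fields.items()
        if q in text.lower()
    )

# A file that mentions the term only in its description...
file_a = {"description": "A scan of the Mona Lisa painting."}
# ...versus one that also carries it in the title and caption:
file_b = {
    "title": "Mona Lisa.jpg",
    "caption": "The Mona Lisa by Leonardo da Vinci",
    "description": "A scan of the Mona Lisa painting.",
}
assert field_score("mona lisa", file_b) > field_score("mona lisa", file_a)
```

The same match in a more prominent field simply contributes a larger weight, which is why a good title and caption matter more than repeating the term in the description.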

Caveats
The aforementioned full-text search algorithm is very good, but has some issues as well - especially in our context:

Language
In a traditional text-based search, users likely don't want to see results in languages other than the one they are searching in (the assumption being that the user wouldn't understand other languages). That's different on Commons, because people are not really looking for the descriptions: they want the file.

So if a user searches for pictures of cars, ideally search would also find and return files that match in other languages, such as "auto" in Dutch or "voiture" in French. But unless every image's descriptions and/or captions have translations for every language, text-based search will not find results in other languages.

An additional issue here is that while some words look the same in multiple languages, they may have different meanings. For example "gift" in English versus German, or "chat" in English as compared to French; these differences in language will return wildly different results in text-based search due to the change in meaning.

Synonyms
Similarly, when searching for a bat in text-based search, search will not find images where the animals are referred to only by their scientific name, Chiroptera. This also applies to acronyms, such as NYC when searching for New York City.

Word matches, not concepts
Similarly, a text description might contain a lot more implicit information that simply cannot be captured by scanning wikitext.

A British shorthair is also a cat and a Volvo V40 is a car, but unless their descriptions also explicitly mention cat or car, they won't be found under those terms in a traditional text-based search.

Statements and structured data
Wikidata statements have the potential of solving many of the aforementioned caveats of traditional text-based searches: they are multilingual, have aliases, and are linked to all sorts of related concepts.

How
Since the addition of the "Structured data" tab on file pages, it has been possible to attach Wikidata entities to a file, including statements about what the file "depicts."

Given a search term (like "anaconda"), we'll also search Wikidata for relevant entities. In this case, here are some of the top results:
 * Anaconda (Q483539): town in Montana
 * Eunectes (Q188622): genus of snakes
 * "Anaconda" (Q17485058): Nicki Minaj song

In addition to full text matching, search will also include results that have a "depicts" statement of (one or multiple of) these entities. It will also include results that have a "digital representation of" statement, used for artwork.

This has the potential of drastically expanding the amount of results returned, because entities already cover synonyms (via Wikidata aliases) and language differences (via labels & aliases in multiple languages): a file only needs to be tagged with one depicts statement per item, and search will be able to find that statement and any of its aliases or translations.

And when translations or aliases get added to those entities later on, files tagged with them will automatically benefit from it by now being discoverable under those terms as well. This is why it’s important to continue to enrich the entities added to depicts statements on Commons with more aliases, labels, and other information on Wikidata.

Note: not all entities are considered equally in search ranking. When searching for "iris", users are likely expecting to find multimedia that depicts the genus of plants (Q156901), or maybe the part of an eye (Q178748), but probably not Iris Murdoch, the British writer and philosopher (Q217495).

Based on the similarity to the search term and the importance/popularity of the entity, Media Search will boost multimedia with certain entities more than others.
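The entity matching and boosting described above can be pictured with a small sketch. The mapping from the query to entities and the boost values are hypothetical; the actual weighting used by Media Search is not documented here:

```python
# Hypothetical entity matches for the query "anaconda", with illustrative
# boost values reflecting label similarity and entity popularity:
entity_boosts = {
    "Q188622": 2.0,    # Eunectes, the genus of snakes (likely intent)
    "Q483539": 1.2,    # Anaconda, town in Montana
    "Q17485058": 1.0,  # "Anaconda", Nicki Minaj song
}

def statement_score(file_statements):
    """Score a file by its depicts / digital-representation-of statements."""
    return sum(entity_boosts.get(qid, 0.0) for qid in file_statements)

snake_photo = ["Q188622"]    # depicts: Eunectes
song_cover = ["Q17485058"]   # depicts: the song
assert statement_score(snake_photo) > statement_score(song_cover)
```

A file only needs to carry the relevant Q-item; matching against the query, its aliases, and its translations happens on the Wikidata side, not in the file's wikitext.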

Caveats
Wikidata entities are an excellent signal to help discover additional relevant multimedia:
 * there is less noise (e.g. text descriptions often contain false-positives like "iris" being the first name of the photographer, not the subject of the file).
 * they contain a lot more information (aliases & translations) than individual file descriptions ever can.
 * they can be enriched in one central location (Wikidata).

But they are also a poor indicator for relative ranking:
 * In a file with multiple depicts statements, it's hard to know which statements are the most important or relevant
 * Wikidata has many entities at varying levels of detail

Relative ranking
In a file with multiple depicts statements, it's hard to know which statements are the most important or relevant.

If a file depicts two things, are both equally important, or is one of them the obvious subject and the other a less relevant background detail? If so, which? And is a depicts statement on one file more prominent than the same depicts statement on another?

Consider the "Pale Blue Dot" photographs: even though the earth makes up less than a pixel in the image set, it's a significant feature of the images.

Statements essentially only have two states: something is in the file, or it is not. There is no further detail about just how relevant something is in that file.

The “mark as prominent” feature for statements is provided to address some of these issues, but it is not currently being used consistently. Additionally, the use of qualifiers like 'applies to part' could help improve ranking, but those qualifiers are currently rarely used at all on Commons, though they have precedent on Wikidata. For example, on the Wikidata item for Mona Lisa, the depicted elements have 'applies to part' qualifiers that specify foreground or background, which could provide additional signals to the search ranking algorithm if used on Commons.

While depicts statements are tremendously useful in helping surface additional relevant results, it's hard to use them as a ranking signal: textual descriptions often convey the relative importance of subjects better than these simple statements can.

Level of detail
Wikidata has many entities at varying levels of detail. While we are currently working towards being able to include "child concepts" in search results, it's important to be careful in the weight we give to certain entities, especially when compared to full-text search.

For example, the statements bridge (Q12280), suspension bridge (Q12570), Golden Gate Bridge (Q44440) or tourist attraction (Q570116) could probably all be used to describe a picture of the Golden Gate Bridge, but the Golden Gate Bridge (Q44440) statement already implies all of the others via its various related entities.

In practice, however, things are not always this simple.

German Shepherd dog (Q38280) is a subclass of dog (Q144), which is a subclass of pet (Q39201) - in theory, we should be able to find pictures tagged with "German Shepherd dog" when one searches for "pet."
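The subclass reasoning above amounts to a transitive lookup over "subclass of" (P279) edges. The hard-coded edges below are a stand-in for real Wikidata data:

```python
# Hypothetical subclass-of (P279) edges; real data would come from Wikidata.
subclass_of = {
    "Q38280": "Q144",   # German Shepherd dog -> dog
    "Q144": "Q39201",   # dog -> pet
}

def matches(query_qid, depicted_qid):
    """True if depicted_qid is query_qid or a (transitive) subclass of it."""
    current = depicted_qid
    while current is not None:
        if current == query_qid:
            return True
        current = subclass_of.get(current)
    return False

# A photo tagged "German Shepherd dog" would be found when searching "pet":
assert matches("Q39201", "Q38280")
```

Note that this only walks upwards from specific to general, so a search for "German Shepherd dog" would not (and should not) match a file tagged only with the generic "dog."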

However, a photo whose title is "German Shepherd dog" might actually depict a working dog (Q1806324) rather than a pet.