
Commit

Merge pull request #128 from insurgent92/issue_40
#40 : Translate 5.4 - paragraph 1 / 17
visionNoob authored Oct 5, 2018
2 parents 7844ce4 + eb9e0f1 commit ba0b356
Showing 1 changed file with 32 additions and 23 deletions.
@@ -2,23 +2,16 @@
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using TensorFlow backend.\n"
]
},
{
"data": {
"text/plain": [
"'2.0.8'"
"'2.2.2'"
]
},
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -39,12 +32,36 @@
"\n",
"----\n",
"\n",
"It is often said that deep learning models are \"black boxes\", learning representations that are difficult to extract and present in a \n",
"human-readable form. While this is partially true for certain types of deep learning models, it is definitely not true for convnets. The \n",
"representations learned by convnets are highly amenable to visualization, in large part because they are _representations of visual \n",
"concepts_. Since 2013, a wide array of techniques have been developed for visualizing and interpreting these representations. We won't \n",
"survey all of them, but we will cover three of the most accessible and useful ones:\n",
"It is often said that deep learning models are \"black boxes\", learning representations that are difficult to extract and present in a human-readable form. \n",
"\n",
"While this is partially true for certain types of deep learning models, it is definitely not true for convnets. \n",
"\n",
"The representations learned by convnets are highly amenable to visualization, in large part because they are _representations of visual concepts_. \n",
"\n",
"Since 2013, a wide array of techniques have been developed for visualizing and interpreting these representations. \n",
"\n",
"We won't survey all of them, but we will cover three of the most accessible and useful ones:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Paragraph 1\n",
"# 컨볼루션이 무엇을 학습하고 있는지 시각화해보기 \n",
"이 Notebook은 [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff)의 5장 4절에 나오는 예제 코드가 포함되어 있습니다.\n",
"원본 텍스트(Deep Learning with Python)에 훨씬 더 많은 자료, 특히 추가 설명과 그림들이 포함되어 있습니다.\n",
"여기에서는 예제 코드와 코드에 관련된 설명만 제공됩니다.\n",
"\n",
"----\n",
"\n",
"딥러닝 모델을 \"black boxex\"라고 부르기도합니다. 딥러닝이 학습한 특징들을 사람이 이해할만한 형태로 표현하기 힘들기 때문입니다. 물론 이 주장에 특정 딥러닝 모델에서는 어느정도 사실일수도 있지만, 컨볼루션 네트워크의 경우에는 이야기가 달라집니다. 컨볼루션 네트워크로 학습시킨 특징들은 아주 많은 부분에서 시각화 하기 아주 유용합니다. 컨볼루션 네트워크 자체가 _시각적인 특징들_ 을 학습하기 때문입니다. 2013년부터 컨볼루션 네트워크의 특징들을 잘 시각화하고 해석하고자 하는 기술들이 많이 개발되어 왔습니다. 지금 시간에 모든 기술들을 살펴볼 수는 없겠지만 가장 접근하기 쉽고 유용한 세 가지 기법만 한번 다뤄보도록 하겠습니다. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Paragraph 2\n",
"* Visualizing intermediate convnet outputs (\"intermediate activations\"). This is useful to understand how successive convnet layers \n",
"transform their input, and to get a first idea of the meaning of individual convnet filters.\n",
@@ -57,14 +74,6 @@
"classification problem two sections ago. For the next two methods, we will use the VGG16 model that we introduced in the previous section."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Paragraph 1\n",
"To do"
]
},
{
"cell_type": "markdown",
"metadata": {},
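To make the first technique listed in the diff above concrete, here is a minimal Keras sketch of extracting intermediate activations. It assumes a trained convnet such as the cats-vs-dogs model from two sections earlier; a freshly built stand-in network and a random dummy image are used here so the snippet runs on its own, and the final line shows how the pretrained VGG16 from the previous section would be loaded for the other two methods.

```python
import numpy as np
from keras import layers, models
from keras.applications import VGG16

# Stand-in for the small convnet trained two sections earlier (assumption:
# 150x150 RGB inputs, a few conv/pool blocks, binary output).
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1, activation='sigmoid'),
])

# Technique 1: build a model that maps an input image to the activations
# of the first few layers of the original network.
layer_outputs = [layer.output for layer in model.layers[:4]]
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)

# Dummy tensor standing in for a preprocessed (1, 150, 150, 3) photo.
img_tensor = np.random.random((1, 150, 150, 3)).astype('float32')
activations = activation_model.predict(img_tensor)
print(activations[0].shape)  # feature maps of the first conv layer, e.g. (1, 148, 148, 32)

# Techniques 2 and 3 reuse the pretrained VGG16 from the previous section.
vgg = VGG16(weights='imagenet', include_top=False)
```

In the actual notebook the activations would be computed for a real, preprocessed image, and each channel of a layer's output plotted as a 2D heatmap to inspect what that filter responds to.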
