Problem Description
Inside a self-created conda environment, installing with pip may put the package into the base environment instead, for example:
Requirement already satisfied: lmdb in ./.local/lib/python3.6/site-packages (1.2.1)
Solution
First, switch to the self-created conda environment:
conda activate your_conda
Then run the installation through the environment's own interpreter:
python -m pip install python_package
Invoking pip as python -m pip guarantees that the pip bound to the currently active interpreter runs, so the package lands in the environment's own site-packages rather than in base.
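To verify which interpreter (and hence which site-packages directory) will be used, a quick check from inside the activated environment:

import sys

print(sys.executable)  # should point inside your_conda's directory, not the base environment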
Enumerating objects: 4097, done.
By default, GitHub refuses any single file larger than 100 MB. A commonly suggested fix for failing pushes is to raise Git's HTTP post buffer to at least the size of the largest single file in the repo (here 157286400 bytes = 150 MB):
git config --global http.postBuffer 157286400
Filename too long
Git itself limits file names to 4096 characters, but Git for Windows is compiled with msys, which relies on an older version of the Windows API where the limit is 260 characters.
git config --system core.longpaths true
https://stackoverflow.com/questions/22575662/filename-too-long-in-git-for-windows
Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.
$ hexo new "My New Post"
More info: Writing
$ hexo server
More info: Server
$ hexo generate
More info: Generating
$ hexo deploy
More info: Deployment
Official documentation: torch.bmm(), version 1.2.0
torch.bmm() implements batched matrix multiplication; it is invoked as:

torch.bmm(input, mat2, deterministic=False, out=None)
Similar to matrix multiplication in linear algebra, bmm() requires input and mat2 to be 3-D tensors with the same batch size, and constrains their inner dimensions: if input has shape $(b\times n\times m)$, then mat2 must have shape $(b\times m\times p)$, and the result out has shape $(b\times n\times p)$.
# example
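import torch

# a minimal runnable sketch; shapes are chosen arbitrarily for illustration
x = torch.randn(4, 2, 3)   # input: (b=4, n=2, m=3)
y = torch.randn(4, 3, 5)   # mat2:  (b=4, m=3, p=5)
out = torch.bmm(x, y)
print(out.shape)           # torch.Size([4, 2, 5]), i.e. (b, n, p)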
Version: 4.2.0
Official documentation: hexo
Installation (cnpm is npm pointed at the taobao registry mirror; plain npm works just as well):
npm install -g cnpm --registry=https://registry.npm.taobao.org
cd <your_blog_dirname>
vim <your_blog_dirname>/_config.yml
# install hexo-symbols-count-time
cnpm install hexo-symbols-count-time --save   # per the plugin README; npm install works too
If symbols_count_time is not configured, the plugin falls back to its defaults. The options live under a symbols_count_time key in the site config <your_blog_dirname>/_config.yml.
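As a sketch based on the plugin's README (option names and the default values shown are assumptions and may differ across versions):

# <your_blog_dirname>/_config.yml
symbols_count_time:
  symbols: true         # per-post word count
  time: true            # per-post reading-time estimate
  total_symbols: true   # site-wide word count in the footer
  total_time: true      # site-wide reading time in the footer
  awl: 4                # average word length assumed by the estimate
  wpm: 275              # reading speed (words per minute) assumed by the estimate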
The NexT theme integrates hexo-symbols-count-time out of the box; its display options live in <your_blog_dirname>/themes/next/_config.yml.
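A sketch of the relevant section as found in NexT 7.x (option names assumed from that version):

# <your_blog_dirname>/themes/next/_config.yml
symbols_count_time:
  separated_meta: true    # render count/time on their own meta line
  item_text_post: true    # show label text next to per-post counts
  item_text_total: false  # show label text next to the footer totals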
References:
[1] hexo-symbols-count-time official GitHub
[2] Enabling word count and reading-time estimates
A summary of the Learn Git Branching learning material.
# create a new branch, then switch to it
git checkout -b bugFix   # equivalent to: git branch bugFix; git checkout bugFix
Once development on the new branch is finished, the work needs to be merged back into the mainline. Assuming we are currently on master and want to merge in bugFix:
git merge bugFix
Rebasing instead copies the commit records onto a new base. Assuming we are on bugFix and want its commits replayed on top of master (yielding a linear history):
git rebase master
# show what HEAD points to
cat .git/HEAD   # prints 'ref: refs/heads/<branch>', or a raw commit hash when detached
Detached HEAD
A detached HEAD means HEAD points directly at a specific commit rather than at a branch name; the hash of the target commit can be obtained with git log.
# suppose HEAD currently resolves as HEAD -> master -> C0 (a commit hash)
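git checkout C0        # detach HEAD so it points at C0 directly (use the real hash outside the tutorial)
git checkout master    # re-attach HEAD to the master branch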
torchvision.datasets.ImageFolder expects the dataset to be laid out with one subdirectory per class:

root/
    dog/
        001.png
        002.png
        …
    cat/
        001.png
        002.png
        …
import torchvision.datasets as dset
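import torchvision.transforms as transforms

# minimal usage sketch: 'root' refers to the directory layout shown above
dataset = dset.ImageFolder(
    root='root',
    transform=transforms.ToTensor(),  # applied to every loaded PIL image
)
print(dataset.classes)       # ['cat', 'dog'] (subdirectory names, sorted)
print(dataset.class_to_idx)  # {'cat': 0, 'dog': 1}
img, label = dataset[0]      # returns (image tensor, class index)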
# transform callable variable example
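# a sketch: transform may be any callable that maps a PIL image to the desired output
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # resize every image to a fixed size
    transforms.ToTensor(),          # convert the PIL image to a CHW float tensor
])

# a plain function also works, since only __call__ is required
def to_grayscale(img):
    return img.convert('L')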
Internally, ImageFolder reads each file with default_loader, defined in torchvision/datasets/folder.py, which dispatches on the get_image_backend function.
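A paraphrased sketch of that dispatch (based on the 0.4-era code, not verbatim source):

# torchvision/datasets/folder.py (paraphrased sketch)
def default_loader(path):
    from torchvision import get_image_backend
    if get_image_backend() == 'accimage':
        return accimage_loader(path)  # accimage backend, if selected
    else:
        return pil_loader(path)       # default PIL backend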
get_image_backend itself lives in torchvision/__init__.py and simply returns the name of the currently selected image backend.
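Again as a paraphrased sketch rather than verbatim source:

# torchvision/__init__.py (paraphrased sketch)
_image_backend = 'PIL'  # module-level default

def get_image_backend():
    # name of the package used to load images: 'PIL' or 'accimage'
    return _image_backend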
Paper link: Perceptual Losses for Real-Time Style Transfer and Super-Resolution, ECCV2016
Authors: Justin Johnson, Alexandre Alahi, Li Fei-Fei
the per-pixel losses used by these methods do not capture perceptual differences between output and ground-truth images.
high-quality images can be generated using perceptual loss functions based not on differences between pixels but instead on differences between high-level image feature representations extracted from pretrained convolutional neural networks.
These approaches produce high-quality images, but are slow since inference requires solving an optimization problem.
we combine the benefits of these two approaches. We train feed-forward transformation networks for image transformation tasks, but rather than using per-pixel loss functions depending only on low-level pixel information, we train our networks using perceptual loss functions that depend on high-level features from a pretrained loss network.
feature reconstruction loss $\ell_{feat}^\phi$
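From the paper, with $\phi_j(x)$ the activations of layer $j$ of loss network $\phi$, of shape $C_j\times H_j\times W_j$:
$$\ell_{feat}^{\phi,j}(\hat{y},y)=\frac{1}{C_jH_jW_j}\left\|\phi_j(\hat{y})-\phi_j(y)\right\|_2^2$$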
style reconstruction loss $\ell_{style}^\phi$
To compute it efficiently, reshape $\phi_j(x)$ into a matrix $\psi$ of shape $C_j\times H_jW_j$; then $G_j^\phi(x)=\psi\psi^T/C_jH_jW_j$.
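A minimal PyTorch sketch of this computation (the function name is my own):

import torch

def gram_matrix(features):
    # features: activations phi_j(x) of shape (B, C_j, H_j, W_j)
    b, c, h, w = features.shape
    psi = features.reshape(b, c, h * w)          # psi: (B, C_j, H_j*W_j)
    gram = torch.bmm(psi, psi.transpose(1, 2))   # psi @ psi^T: (B, C_j, C_j)
    return gram / (c * h * w)                    # normalize by C_j*H_j*W_j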
The style reconstruction loss is then the squared Frobenius norm of the difference between the Gram matrices of the output and target images:
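In symbols, from the paper:
$$\ell_{style}^{\phi,j}(\hat{y},y)=\left\|G_j^\phi(\hat{y})-G_j^\phi(y)\right\|_F^2$$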
Pixel loss $\ell_{pixel}(\hat{y},y)$: the normalized Euclidean distance $\|\hat{y}-y\|_2^2/CHW$ between the output image and a ground-truth target; it can only be used when such a target is available.
Total variation regularization $\ell_{TV}(\hat{y})$: encourages spatial smoothness in the output image.
Style transfer: the goal is to generate an image $\hat{y}$ that combines the content of a content target image $y_c$ with the style of a style target image $y_s$.
Super-resolution: the task is to generate a high-resolution output image from a low-resolution input.
In future work we hope to explore the use of perceptual loss functions for other image transformation tasks, such as colorization and semantic segmentation. We also plan to investigate the use of different loss networks to see whether for example loss networks trained on different tasks or datasets can impart image transformation networks with different types of semantic knowledge.