istio mixer adapter setup

Installation troubleshooting

https://github.com/istio/istio/wiki/Mixer-Out-Of-Process-Adapter-Walkthrough

Step 1: 'inconsistent vendoring' error

https://github.com/golang/go/issues/34657
The fix is to point GOROOT at a standalone Go installation, not the one whose src directory contains the istio sources:
export GOROOT=/root/software/go

Step 2 BUILD_WITH_CONTAINER=1 make gen

Just to ensure everything is good, let’s generate the file and build the code. Go to the top of the repo and do:
BUILD_WITH_CONTAINER=1 make gen
In practice the commands are:

export BUILD_WITH_CONTAINER=1
cd /root/kube/go/src/istio.io/istio
make gen

This step prints a certificate error, but it has had no impact so far; keep an eye on it.

Step 3: mygrpcadapter

mygrpcadapter.go:40:2: no matching versions for query "latest"

Fix the import: "istio.io/istio/pkg/log" -> "istio.io/pkg/log"

Step 5: make mixs hangs

The pushd $ISTIO/istio && make mixs in step 5 seems to succeed only while the minikube service is shut down,
but the docker daemon must be running.

Both mixc and mixs end up under /root/kube/go/src/istio.io/istio/out/linux_amd64/,
that is, under $ISTIO/istio/out/linux_amd64,
while the walkthrough says they are in $GOPATH/out/linux_amd64/release/.

go build

When compiling multiple packages or a single non-main package, build compiles the packages but discards the resulting object, serving only as a check that the packages can be built.

go build produces an executable only when compiling a main package; for non-main packages nothing is written out, it just verifies that the packages compile.

Summary

After following the walkthrough you still need to package and deploy the adapter as a Kubernetes service; you cannot run the adapter from the script under cmd every time.
In other words, compile the Go files under cmd and deploy the result as a service inside Kubernetes.

The .proto file under mygrpcadapter/config exists to generate mygrpcadapter.yaml, which is itself a Kubernetes resource that must be kubectl apply'd.
The Params in config/config.proto are consumed by the kind: handler in sample_operator_cfg.yaml,
whose instance uses the metric template to package the data passed to the adapter.
The operator that configures Istio controls how this template-specific data is constructed and dispatched to adapters.
https://github.com/istio/istio/wiki/Mixer-Out-Of-Process-Adapter-Dev-Guide

The adapter's rule also defines policies, for example allowing only specific hosts.
When a browser accesses some service, the gateway first forwards to that service; that routing is done by the gateway and virtualservice.
The rule on the adapter side is a separate matter.

Using minikube as a learning environment

minikube

https://kubernetes.io/docs/setup/learning-environment/minikube/

After installing CentOS, because VMware was reconfigured earlier when setting the static IP, besides nmtui you must also edit /etc/sysconfig/network-scripts/ifcfg-ens33.

yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

yum install -y docker-ce

systemctl disable firewalld

swapoff -a
vi /etc/fstab
Comment out the line below to disable swap permanently:

#/dev/mapper/centos-swap swap swap defaults 0 0

sysctl net.bridge.bridge-nf-call-iptables=1

minikube start --vm-driver=none (not suggested; the kubernetes version may not be supported by istio)
minikube start --memory=4096 --cpus=4 --kubernetes-version=v1.15 --vm-driver=none (--kubernetes-version can be omitted; sometimes specifying it causes errors)
minikube start --memory=6096 --cpus=4 --vm-driver=none
minikube dashboard --url

minikube start --memory=3072mb --cpus=2 --vm-driver=docker

Deploy a sample:
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment hello-minikube --type=NodePort --port=8080
kubectl get pod
minikube service hello-minikube --url
kubectl describe services hello-minikube

[root@localhost ~]# kubectl describe services hello-minikube
Name: hello-minikube
Namespace: default
Labels: app=hello-minikube
Annotations: <none>
Selector: app=hello-minikube
Type: NodePort
IP: 10.96.132.162
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31451/TCP
Endpoints: 172.17.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Endpoints: 172.17.0.4:8080 is the cluster-internal address.
NodePort: 31451/TCP is what is reachable from outside.

kubectl get namespace
kubectl get pods -n istio-system
kubectl describe pod -n istio-system istio-galley-69674cb559-chxtg
kubectl logs -n istio-system istio-policy-5cdbc47674-q7flx

istio gateway

kubectl get gateway
istioctl -n istio-system pc routes istio-ingressgateway-5db78457f5-h49tm --name http.80 -o json

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

In the gateway resource, the selector refers to Istio’s default ingress controller by its label, in which the key of the label is istio and the value is ingressgateway

[root@localhost ~]# kubectl get pods -listio=ingressgateway -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-649f9646d4-nc88s 1/1 Running 0 10h

It seems that by default the gateway controller pod is looked up in the same namespace as the gateway resource, falling back to this one in istio-system if nothing is found.

https://stackoverflow.com/questions/51835752/how-to-create-custom-istio-ingress-gateway-controller
The article above explains this in detail: if the selector is not the default istio: ingressgateway, the gateway resource and the gateway controller must be in the same namespace.
Put differently, the gateway resource's default namespace is effectively istio-system.

An answer from https://discuss.istio.io/t/istio-ingressgateway-controller-and-namespaces/1217/5:
In 1.1 this is changing a bit - the Gateway resource should be in same namespace as the gateway(the service, deployment, certificates). The default is istio-system - but you can run it in other namespace, or in multiple namespaces if you need to. Each namespace will have a different load balancer IP and may handle different domains with different certs.

https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/

But I don't understand this passage from the official docs:
One or more labels that indicate a specific set of pods/VMs on which this gateway configuration should be applied. The scope of label search is restricted to the configuration namespace in which the resource is present. In other words, the Gateway resource must reside in the same namespace as the gateway workload instance.

To summarize:
The gateway resource and the gateway controller pod are two different things. The former can be listed with kubectl get gateway; the latter is really just a pod.
When we create a gateway resource without specifying a namespace, the lookup effectively defaults to istio-system: with istio: ingressgateway it goes to
the istio-system namespace and looks for a gateway pod labeled istio: ingressgateway.
To create a custom gateway controller pod, see https://stackoverflow.com/questions/51835752/how-to-create-custom-istio-ingress-gateway-controller; in short, create a new gateway pod resource in some namespace,
then create the gateway resource in that same namespace with a selector matching the gateway pod.

Point 7 of https://istio.io/blog/2019/custom-ingress-gateway/ says that type: LoadBalancer can create a gateway controller pod,
and someone in the stackoverflow thread above also answered 'In fact, it is very simple. Istio's ingress is just a regular Kubernetes Service of "Load Balancer" type.'
This still needs verification, though.

On gateways, https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/ is the best article.
On minikube the default ingress gateway is simply exposed through a NodePort, so accessing a specific service actually goes through this gateway;
'ingress' refers to exactly this gateway.
The hosts field in a VirtualService looks like a duplicated field, since destination carries it too;
https://blog.csdn.net/kozazyh/article/details/81477629 explains this duplication.

troubleshooting

network issue due to iptables

kubectl get pods -n istio-system
kubectl get pods -n kube-system
If pods in either of the two namespaces above show the problem below, flush iptables:
none: coredns CrashLoopBackOff: dial tcp ip:443: connect: no route to host #4350

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start docker
systemctl start kubelet

multiple containers due to the istio sidecar

When retrieving logs for pods that have multiple containers, you need to specify the container you want the logs for.
For example:

kubectl logs productpage-v1-84f77f8747-8zklx -c productpage

The reason is that Istio adds a second container to deployments via the istio-sidecar-injector. This is what puts the envoy proxy in front of all of your pods' traffic.

why kube-dns and core-dns both exist

kubectl get deployments -n kube-system
kubectl get pods -n kube-system
The two commands above show coredns.
kubectl get svc -n kube-system
This one shows kube-dns.
But kubectl describe -n kube-system service/kube-dns reveals that
its Selector: k8s-app=kube-dns simply points at the pods from kubectl get pods -lk8s-app=kube-dns -n kube-system.

reference

https://istio.io/docs/examples/bookinfo/
https://www.cnblogs.com/psy-code/p/9311104.html
kubectl get node -o wide

java servlet

This section covers the differences among Servlet 2.5, 3.0, and 3.1.

servlet2.5

tomcat-nio
Tomcat already implemented NIO back in the Servlet 2.5 / Tomcat 6 era, but only for accepting connections and parsing request headers.

servlet3.0

Servlet 3.0, i.e. Tomcat 7 and later, introduced asynchronous servlets, but the asynchrony applies only to request handling: once a request reaches the servlet it is handed to a separate thread pool so that it does not occupy the container's own request-handling pool. The gain feels small, because the I/O on the request stream itself is still plain blocking I/O. Servlet 3.0 allowed asynchronous request processing but only traditional I/O was permitted. In other words, with Servlet 3.0, only the request processing part became async, but not the I/O for serving the requests and responses. If enough threads block, this results in thread starvation and affects performance. So 3.0 can only be called async; 3.1 is where NIO arrives.

import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebServlet(value = "/simpleAsync", asyncSupported = true)
public class SimpleAsyncHelloServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        AsyncContext asyncContext = request.startAsync();

        asyncContext.start(() -> {
            new LongRunningProcess().run();
            try {
                asyncContext.getResponse().getWriter().write("Hello World!");
            } catch (IOException e) {
                e.printStackTrace();
            }
            asyncContext.complete();
        });
    }
}

Here we first obtain the request's AsyncContext via request.startAsync(), then call the AsyncContext's start() method to process asynchronously, and call complete() to notify the servlet container once done. start() asks the servlet container for another thread (it may come from the container's existing main pool or from a separately maintained pool; containers differ), the request continues on that new thread, and the original thread is returned to the main pool. In fact this improves performance very little: if the new thread shares the main pool with the initial one, it amounts to idling one thread while occupying another.

servlet3.1

Servlet 3.1, i.e. Tomcat 8 and later, implements non-blocking I/O. With Servlet 3.1 NIO, this problem is solved by ReadListener and WriteListener interfaces. These are registered in ServletInputStream and ServletOutputStream. The listeners have callback methods that are invoked when the content is available to be read or can be written without the servlet container blocking on the I/O threads. So these I/O threads are freed up and can now serve other requests, increasing performance.
For an example see https://github.com/mengxu2018/java-code-sample/tree/master/servlet3.1
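The example repo linked above has a real servlet; as a self-contained sketch of the same callback contract, here is a simplified model. The ReadListener interface and FakeServletInputStream below only mimic javax.servlet.ReadListener and ServletInputStream; they are stand-ins for this illustration, not the real API.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

interface ReadListener {            // mirrors javax.servlet.ReadListener
    void onDataAvailable() throws IOException;
    void onAllDataRead() throws IOException;
    void onError(Throwable t);
}

class FakeServletInputStream {      // stand-in for javax.servlet.ServletInputStream
    private final InputStream data;

    FakeServletInputStream(byte[] bytes) { this.data = new ByteArrayInputStream(bytes); }

    void setReadListener(ReadListener l) throws IOException {
        // A real container fires these callbacks from its NIO event loop;
        // here we fire them synchronously just to show the control flow.
        l.onDataAvailable();
        l.onAllDataRead();
    }

    boolean isReady() throws IOException { return data.available() > 0; }
    int read() throws IOException { return data.read(); }
}

public class NioReadDemo {
    static final List<String> events = new ArrayList<>();

    public static void main(String[] args) throws IOException {
        FakeServletInputStream in = new FakeServletInputStream("hello".getBytes());
        in.setReadListener(new ReadListener() {
            public void onDataAvailable() throws IOException {
                StringBuilder sb = new StringBuilder();
                while (in.isReady()) sb.append((char) in.read()); // never blocks
                events.add("read:" + sb);
            }
            public void onAllDataRead() { events.add("done"); }
            public void onError(Throwable t) { events.add("error"); }
        });
        System.out.println(events);
    }
}
```

In a real servlet you would call request.getInputStream().setReadListener(...) inside an async context; the container then drives onDataAvailable()/onAllDataRead() from its poller instead of parking a thread on the read.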

ref

https://www.cnblogs.com/davenkin/p/async-servlet.html
servlet-spec

java exception

{
  status: 1,
  message: "transfer succeeded",
  data: {}  // a successful business call sometimes returns data to the front end
}
{
  status: 1,
  message: "your balance is insufficient",  // insufficient balance can be treated as a normal business-flow prompt to the user
  data: {}
}
{
  status: 1,
  message: "your plan cannot be changed within three months",  // likewise a normal business-flow prompt
  data: {}
}
{
  status: 0,
  errorCode: 1001,
  errorMessage: "the request failed, please try again later",  // database errors, socket errors etc. should not be shown to users
  exception: java.xx.socketException, etc.
}

But some business flows do need exceptions surfaced to the user, e.g. EmailNotUniqueException or InvalidUserStateException, which calls for custom exceptions: https://stackabuse.com/how-to-make-custom-exceptions-in-java/
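A minimal sketch of such a custom exception, following the approach in the article above; the class name and field here are illustrative, not from any real codebase. Extending RuntimeException keeps it unchecked, so callers are not forced to catch it and a global handler can map it to a response payload.

```java
// Hypothetical business exception: name the rule that was violated,
// stay unchecked, and carry just enough context for the error response.
public class EmailNotUniqueException extends RuntimeException {
    private final String email;

    public EmailNotUniqueException(String email) {
        super("Email already registered: " + email);
        this.email = email;
    }

    public String getEmail() { return email; }
}
```

A service method would throw new EmailNotUniqueException(email), and e.g. a Spring @ExceptionHandler would turn it into an error payload like the status: 0 format above.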

http://literatejava.com/exceptions/checked-exceptions-javas-biggest-mistake/
This article explains what is wrong with checked exceptions.

https://stackoverflow.com/questions/7561550/list-of-spring-runtime-exceptions

Spring internally uses unchecked exceptions almost everywhere, because an unchecked exception does not force callers to catch it; if one occurs it goes straight to the log, like the NullPointerException below.

java.lang.NullPointerException: null
 at com.ssc.rest.controller.ManagerControllerMiddleware.delete(ManagerControllerMiddleware.java:61)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)

This output ends up in the log file.

Business systems usually need custom exceptions. Given that, should a zero balance or an ineligible plan change still be thrown up to Spring's global handler, which then returns an error payload {errormsg: xx, errorCode: xx} to the JS side?
Probably not: defining custom exceptions for that category of case is not a good approach, though this is not absolute.
Typical custom exceptions are things like a non-unique username or a malformed email address.

es6 promise

The arguments to then are optional, and catch(failureCallback) is short for then(null, failureCallback). You might see this expressed with arrow functions instead:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises

The code below can be run directly in the browser console.

new Promise((resolve, reject) => {
 console.log('Initial');
 throw new Error('Something failed');
 // resolve();
})
.then(() => {
 // throw new Error('Something failed'); 
 console.log('Do this');
})
.then(null,
 () => {
 // throw new Error('Something failed'); 
 console.log('Do that');
 }
 )
.then(() => {
 console.log('Do this, no matter what happened before');
});

The code above is equivalent to:

new Promise((resolve, reject) => {
 console.log('Initial');
 throw new Error('Something failed');
 // resolve();
})
.then(() => {
 // throw new Error('Something failed'); 
 console.log('Do this');
})
.catch(() => {
 console.error('Do that');
})
.then(() => {
 console.log('Do this, no matter what happened before');
});

When you return something from a then() callback, it’s a bit magic. If you return a value, the next then() is called with that value. However, if you return something promise-like, the next then() waits on it, and is only called when that promise settles (succeeds/fails).

Promise.resolve('foo').
 then(() => {return "bar"}).
 then((v) => console.log(v))
Promise.resolve('foo').
 then(() => {return Promise.resolve('bar')}).
 then((v) => console.log(v))

https://stackoverflow.com/questions/34094806/return-from-a-promise-then

jvm classloader

https://tomcat.apache.org/tomcat-9.0-doc/class-loader-howto.html

One thing the Tomcat doc does not spell out about the shared class loader: the webapp class loader searches its own repositories first, and only consults the shared class loader when the class is not found there.

log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [java.net.URLClassLoader@2f0a098b] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
  context: /cdt4middleware
  delegate: false
  repositories:
    /WEB-INF/classes/
----------> Parent Classloader:
java.net.URLClassLoader@2f0a098b
].
log4j:ERROR Could not instantiate appender named "RollingAppender".

This error occurs because there is one log4j under the webapp and another under the shared class loader.
The osa framework classes are loaded by the shared class loader; once it has loaded org.apache.log4j.Appender, a DailyRollingFileAppender loaded later by the webapp class loader (against the webapp's own copy of Appender) triggers exactly this error.
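The underlying rule is that a loaded class's identity is the pair (class name, defining loader), so the "same" Appender from two different loaders is not assignable. A small self-contained way to inspect the standard delegation chain (outside Tomcat, so no shared/webapp loaders exist here, but the parent-delegation idea is the same):

```java
// Class identity in the JVM is (class name, defining loader).
public class LoaderDemo {
    public static void main(String[] args) {
        // Core classes are defined by the bootstrap loader (reported as null)
        System.out.println(String.class.getClassLoader());
        // Application classes come from the system ("app") class loader...
        ClassLoader app = LoaderDemo.class.getClassLoader();
        System.out.println(app);
        // ...which delegates upward (platform/extension loader, then bootstrap)
        System.out.println(app.getParent());
    }
}
```

Tomcat's webapp loader is deliberately child-first (delegate: false in the log above), which is how two copies of a class can coexist and collide.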

jdk8 new features

Why did JDK 8 introduce default methods in interfaces?

To give you an example take the case of the Collection.forEach method, which is designed to take an instance of the Consumer functional interface and has a default implementation in the Collection interface:

default void forEach(Consumer<? super T> action) {
    Objects.requireNonNull(action);
    for (T t : this) {
        action.accept(t);
    }
}

If the JDK designers hadn't introduced the concept of default methods, then all the implementing classes of the Collection interface would have had to implement the forEach method, so it would be problematic to switch to Java 8 without breaking your code.

So to facilitate the adoption of lambdas and the use of the new functional interfaces like Consumer, Supplier, Predicate, etc., the JDK designers introduced the concept of default methods to provide backward compatibility, and it is now easier to switch to Java 8 without making any changes.

If you don’t like the default implementation in the interface you can override it and supply your own.
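That last point can be sketched with a made-up interface (Greeter below is purely illustrative, mirroring the Collection.forEach situation): implementors inherit the default body for free, and an override can still reuse it via Interface.super.

```java
// Made-up interface with a default method.
interface Greeter {
    String name();

    // default implementation: implementors get this without writing it
    default String greet() { return "Hello, " + name() + "!"; }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        Greeter plain = () -> "world";          // inherits greet()
        Greeter loud = new Greeter() {          // overrides the default
            public String name() { return "world"; }
            @Override
            public String greet() {
                // an override can still reuse the default via Interface.super
                return Greeter.super.greet().toUpperCase();
            }
        };
        System.out.println(plain.greet());  // Hello, world!
        System.out.println(loud.greet());   // HELLO, WORLD!
    }
}
```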

java thread pool

https://www.baeldung.com/java-runnable-callable
The difference between Runnable and Callable.
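The two differences the article highlights can be shown in a few lines: Callable.call() returns a value (retrieved through a Future) and may throw a checked exception, while Runnable.run() does neither.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunnableVsCallable {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Runnable task = () -> System.out.println("runnable: no result");
        pool.execute(task);                       // fire and forget

        Callable<Integer> calc = () -> 6 * 7;     // returns a result
        Future<Integer> future = pool.submit(calc);
        System.out.println("callable result: " + future.get()); // blocks until done

        pool.shutdown();
    }
}
```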

date and time issues

https://gitee.com/mengxu2018/jdk-new-feature/tree/master/jdk8-datetime

If you use JPA, see the article below. Older JDK code generally used java.sql.Timestamp; note that it lives in the java.sql package.
https://www.baeldung.com/jpa-java-time
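As a quick sketch of old vs new: JDK 8 added bridge methods so the legacy java.sql.Timestamp converts cleanly to and from the immutable java.time.LocalDateTime (the specific date here is arbitrary).

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class DateTimeDemo {
    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2020, 1, 2, 3, 4, 5);

        // Bridge methods added in JDK 8 for legacy JDBC code
        Timestamp ts = Timestamp.valueOf(now);
        LocalDateTime back = ts.toLocalDateTime();

        System.out.println(back.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME)); // 2020-01-02T03:04:05
    }
}
```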